Digital Velocity Podcast Hosted by Tim Curtis and Erik Martinez

Episode 55: AI Ethics and the Pursuit of Good Tech - Olivia Gambelin

This week on the Digital Velocity Podcast, Olivia Gambelin of Ethical Intelligence joins Erik and Tim to discuss how businesses can apply Artificial Intelligence Ethics while pursuing good tech, stimulating innovation, and building consumer trust.

Generally, ethics involves values or moral principles that guide behavior. In business, ethics follows that same paradigm. Olivia says, “But in the context of artificial intelligence and business, how I like to define ethics is a decision analysis tool. What that means for me is when I am working with a client, we are analyzing their decisions to understand and assess how in alignment those decisions are with their core values that they have set out to implement.”

Artificial intelligence is generating new conversations and questions when it comes to ethics and trust within the framework of business. Olivia explains, “…when it comes to AI Ethics, the first thing that you're doing there is actually figuring out, well, what are our foundational values that we want to work on? What are the values that are going to gain trust with our user base? The whole fact around building that trust comes to basically saying what you're going to do and then going and doing it.”

AI Ethics gives brands a great opportunity to build technology that earns consumer trust while also stimulating innovation and creativity. Olivia says, “Now, we have the opportunity in our digital age for two very cool things. On one hand, we can look at designing our technology and asking ourselves the question, how can I create technology that helps me in that pursuit of the good life, not one that helps me with commodity and helps me get my Amazon package in 30 minutes? But what kind of technology helps me in the pursuit of my own good life, and hopefully those around me as well? But how do we also design in the pursuit of good tech, technology that's in alignment with our values?”

Listen to this week’s episode to learn more about AI Ethics in the context of business.

About the Guest:

Olivia is an AI Ethicist who specializes in the practical application of ethics to technological innovation. She is the founder and CEO of Ethical Intelligence, an AI Ethics advisory firm providing Ethics-as-a-Service through the world’s largest network of Responsible AI practitioners. Olivia works directly with business leaders on the operational and strategic development of Responsible AI and has advised organizations ranging from Fortune 500 companies to Series A startups across the healthcare, financial, and media sectors in utilizing ethics as a decision-making tool.

In addition to her work as an ethicist, she is on the Founding Editorial Board for Springer Nature’s AI and Ethics Journal, Co-Chair of IEEE’s AI Expert Network Criteria Committee, and is on the Advisory Board of the Ethical AI Governance Group (EAIGG). Olivia splits her year between San Francisco where she is an active member of Silicon Valley’s ecosystem of startups and investors, and Brussels where she advises on AI policy and regulation.

Transcript

Erik Martinez: [00:00:00] Well, welcome to today's episode of the Digital Velocity Podcast. I'm Erik Martinez from Blue Tangerine.

Tim Curtis: And I'm Tim Curtis from CohereOne.

Erik Martinez: Today, we welcome Olivia Gambelin to the show. Olivia is an AI Ethicist who specializes in the practical application of ethics to technology and artificial intelligence innovation. She's the founder of Ethical Intelligence, an AI Ethics advisory firm providing Ethics-as-a-Service. Olivia works directly with business leaders on the operational and strategic development of Responsible AI and has advised various organizations, from [00:01:00] Fortune 500 companies to Series A startups, across the healthcare, financial, and media sectors in utilizing ethics as a decision-making tool.

Olivia, welcome to the show.

Olivia Gambelin: Thanks so much for having me here today, guys.

Erik Martinez: When we met at dinner at MAICON, I didn't even know such a job existed, so I am fully fascinated with the concept of what you're doing with ethics and specifically, AI Ethics. But before we kind of delve into, like, all that fun stuff, can you give our audience a brief synopsis of your journey to starting Ethical Intelligence and specializing in AI Ethics?

Olivia Gambelin: Absolutely. And Erik, you're not alone. I'm usually one of the first Ethicists people will meet, which is always a fun conversation. It's shifted from what the heck is an Ethicist to that's really fascinating. How did you get to that point? So, it's been exciting for me to see at least the general societal shift to [00:02:00] recognizing Ethics in Artificial Intelligence.

So, I've been at this for just over six years, probably about seven now, as a practicing ethicist in the space of artificial intelligence, which sounds very funny to say, but it makes me one of the veterans. This is still a very, very new space. I came to it from a background of more digital privacy research, as well as digital communications and strategy. So, always in the tech field, always in the tech sector. But I had based all of my academic studies in Ethics because I found the field absolutely fascinating.

Little by little, I was slowly pulled towards this field of Ethics, and for lack of a better term, I had a very cheesy light bulb moment, actually sitting in a conference in Brussels, which is where I am recording today, where I heard the term data ethics for the very first time. And that was my light bulb moment of, Oh my goodness, this field that I'd been [00:03:00] working in actually connected with what was my intellectual love of ethics. And so I did not look at any career prospects and decided then and there that I was going to become an ethicist and stubbornly drove my way forward.

It's been quite an interesting journey starting out in a field that doesn't exist, in a market that doesn't exist, and starting a company in that. So, Ethical Intelligence has been through many different phases over the years. We're currently looking at a whole new phase of the company again, but I think the beautiful thing about that is we're able to grow up with the industry and the market and adapt as we go. It's all been a very cool experience, I guess, to summarize, I was a philosophy nerd that wanted to work in tech and that's how I found AI Ethics.

Erik Martinez: It's a great way to summarize it. I've been thinking about ethics and the definition of ethics, and at least here in the United States, I think most people's frame of reference is when we hear on the news that the Senate Committee or the House [00:04:00] Ethics Committee is investigating somebody, and I think that gives a really narrow view of what Ethics really is. So, maybe if you wouldn't mind taking a moment, just give the audience that really nice synopsis of what ethics really means in terms of what you do today.

Olivia Gambelin: Absolutely. And it's something that I often have to establish right at the beginning, both with talks and when I start working with new clients, because ethics itself is a term that we all know, but it's not really one that we're used to defining. So, if I were to ask you, Erik, and you, Tim, what your definition of ethics is, you would probably give me different definitions. That's because you are creating this definition off of your own relationship with, say, your understanding of values and morality and your moral compass. There's nothing wrong with that. It's actually a very cool thing that ethics can be so personal to us.

But in the context of artificial intelligence and business, how I like to define ethics [00:05:00] is a decision analysis tool. What that means for me is when I am working with a client, we are analyzing their decisions to understand and assess how in alignment those decisions are with their core values that they have set out to implement. Now, that's not the traditional definition, and you're not going to find it in any ancient philosophy textbook. I can give you the old-school definition if you'd like, but for me, having something very practical means calling it that decision analysis tool.

Erik Martinez: That's a great way of framing it. We all need some way to make decisions that's not arbitrary. It's grounded in some set of guidelines or principles or facts that we want to work with. So, I think that's a fantastic definition. Thank you.

Tim Curtis: We talk a lot about transparency and trust building, and Erik and I work with a lot of retail brands. You mentioned data privacy and all the [00:06:00] concerns that have been sort of creeping up the last couple of years. I chair a committee in Washington on privacy, and one of the things that I really like to say about the environment that we're in, both the legislative and the legal environment, is that it really testifies to the lack of trust.

And it really stands in stark contrast to, you know, what historically have been trusted institutions, in business and even creeping into the government realm. But there's just generally a tremendous lack of trust with anything related to privacy, data, even the traditional thoughts or realms of ethics that we would probably know historically. There just seems to be a lack of trust there as well.

And you're beginning to hear voices talk about a crisis, that it's at a crisis level. And we don't fully understand that when we lack trust and we lack transparency, it's sort of abrasive to the human process. Brands that are trying to navigate all of this [00:07:00] are getting new state-level privacy legislation. We're slowly working on something in Washington to kind of come alongside. Of course, I was involved in some of the GDPR safe harbor stuff back in the day.

And now, as we're kind of working through all of this, these brands are trying to come out on the other side and present sort of a plan, if you will, to maintain usage of data in order to deliver the kind of materials or the kind of content that consumers want. But they want to do so in a way that honors the privacy legislation, that honors the spirit of the privacy legislation as well. Through all of that, how do you begin to chart a course forward for them and advise them, here's what I would do? Where do you go with that?

Olivia Gambelin: We'll focus in on privacy here, and I'll touch as well on how privacy builds trust with your customer base. Let's say a company comes to me and they say, Hey, we want to implement privacy. My first set of questions are going to be looking at, well, what kind of [00:08:00] approach is going to fit your company culture and what you already have in place? What I mean by that is there are really two approaches you can take to implementing values. One is a risk-preventative, risk-mitigation focus, and the other one is much more focused on innovation by design.

So, think of it in terms of how can we prevent harms happening around privacy. That means data breaches. That means violation of data privacy rights, manipulation of users, and so on. When you are coming at it from a risk prevention standpoint, you are going by the letter of the law. You're looking at the GDPR. You're looking at the CCPA. Those are going to be really kind of your two highlights that you're going to be working towards. But you're really in this mindset of how do I make sure that I don't cross the legal line, but also how do I make sure that I am protecting the data, I am protecting the privacy of my users?

If you flip that onto the [00:09:00] other end, you have the innovation standpoint, and what that means is you are trying to incorporate privacy by design. So, you're looking from the very start at how you build in end-to-end encryption. Is this really a data set that we need to use? Are there other ways that we can protect our users? For example, I know a lot of companies that have started experimenting with synthetic data to help protect the identities of their users while they're building test models. So, there are a lot of different techniques that you can use in that direction.

It results in different outcomes, different products, different features. I always advise having a dual approach, but those two approaches are the first set of questions that I'm looking at for something around privacy. Are we preventing harm, or are we aligning with that value? What's going to fit your needs now as a company, and what's going to fit the expectations of that customer base that you're building for?
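As a loose illustration of the synthetic-data idea Olivia mentions (standing in fake but realistically shaped records for real user data while building test models), here is a minimal Python sketch. The field names, value ranges, and distributions are invented for this example; real synthetic-data work typically relies on dedicated tooling and checks that the synthetic set preserves the statistics of the original.

```python
# Minimal sketch: generating synthetic test records instead of using real user data.
# Field names and value ranges are invented for illustration only.
import random

def synthetic_user(rng: random.Random) -> dict:
    """Return one fake user record with realistically shaped, non-identifying values."""
    return {
        "age": rng.randint(18, 80),
        "region": rng.choice(["north", "south", "east", "west"]),
        "monthly_spend": round(rng.lognormvariate(3.5, 0.6), 2),
        "opted_in_marketing": rng.random() < 0.4,
    }

rng = random.Random(42)  # seeded so the test data is reproducible
test_set = [synthetic_user(rng) for _ in range(5)]
for record in test_set:
    print(record)
```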

Tim Curtis: So, data privacy, [00:10:00] GDPR was sort of like the large exhale, right? We all geared up, and then it really wasn't anything here stateside. It just didn't really have much in terms of activity here in the States. CCPA is a little different. Obviously, a lot of rushing to get things in place and kind of looking around the rest of the country at what the other state legislatures were developing. Now we have CPRA, the California Privacy Rights Act, right on its heels. It's going to dial it up a little bit more.

But now brands, we think we're getting maybe a little bit of our head around at least, probably more risk management of data privacy. But now they're turning their eyes towards AI. They're recognizing that AI is popping up everywhere. They've got a vendor community or a tech stack that is in the process of utilizing AI. They don't really always have visibility into the workings of AI.

But if you have worked with AI for any length of time, you understand the framework and what's missing if you aren't having conversations related to ethics. [00:11:00] Now, as we pivot, imagine those very same brands now having questions about ethics and how they begin to establish trust in ethics, like they were attempting to do with data privacy. What does that look like?

Olivia Gambelin: So that, thankfully, looks a little bit the same. We're still looking at that risk versus innovation question as a baseline. But then from there, what we're looking at instead of data privacy, which focuses specifically on privacy itself, when you're moving into ethics and AI, you're opening up the playing field. You're starting to look at, well, what are our foundational values? Privacy should be one of them, but are we also needing to incorporate factors like transparency and explainability? Are we looking at fairness and bias testing? Are we looking at accountability and responsibility?

I know these all sound like buzz terms, but just as you heard me talking about data privacy, that was only scratching the surface of data privacy, which was one of those values. Every single one of the [00:12:00] other values that I just named has a very rich field supporting it and an immense amount of research happening from a Responsible AI standpoint, which is the field that I come from.

To bring it back around, when it comes to AI Ethics, the first thing that you're doing there is actually figuring out, well, what are our foundational values that we want to work on? What are the values that are going to gain trust with our user base? The whole fact around building that trust comes to basically saying what you're going to do and then going and doing it.

There's this risk of something called ethics washing. It's also known as bluewashing. It's very similar, actually, to the term greenwashing. And now I'm just naming terms, but greenwashing happened a lot around ESG, around that E-factor, around environmental, where companies were saying, Oh, look, we're very environmentally friendly, when if you pulled behind the curtain, they were anything but.

That happens as well in ethics, where a company will say, look, we've got a Responsible AI policy, and we have these external ethics values. And then you pull behind the curtain and say, there's nothing here though. You [00:13:00] have a list, like the values a startup writes on the wall: our values are happiness, playfulness, and innovation. You're like, but that doesn't mean anything. You're not doing any of that.

So, what you're trying to do when you're building trust is prevent that ethics washing where you say, we are implementing strict bias testing. Here, you can actually see the actions that we're doing, and here's the output to show that we are actually doing what we say we're going to. And that's really a key trust builder.

Erik Martinez: Do you have any good examples of what companies are doing to demonstrate their commitment in the practices behind what they're doing?

Olivia Gambelin: It's a bit tricky because when Responsible AI and Ethics first started, it started more as a PR stunt. So, there were a lot of companies writing a blog post or two and putting out a list of values and saying, yay, we did it. Now, it's actually becoming much stricter. So, companies that are doing it well, actually [00:14:00] are putting out reports and case studies.

You can really tell the difference between a "case study", and I'm using air quotations here, done by a company that is doing this more as a marketing PR stunt versus a company that has actually put a framework into place and is actively showing here's the impact, the before and after impact, here are the steps that we went through. Versus the ones that are more, let's say, surface level, there's no substance. If you were going to try and take that information and use it, you'd have nowhere to start.

So, that I would say is probably one of the biggest indicators around whether or not a company is taking Responsible AI seriously. I should preface here that I use Responsible AI and Ethics interchangeably. Ethics is a practice within Responsible AI; Responsible AI is the industry name. But having those case studies, a lot of companies are trying to make names for themselves through those case studies. The ones that have substance and meat behind them are the companies that are actually doing this well. But it can be hard. It's very easy to put out a list of [00:15:00] values and do nothing. Let's put it that way.

Erik Martinez: Absolutely. It just kind of brings the question up of, as you're starting to work with a brand on talking about the practices that they should be implementing in relationship to their values, what's usually like the lowest hanging fruit in terms of like getting things started? And within the organization who champions that?

Obviously, the executive team has to be committed to doing it. We understand that, but they're not necessarily the ones that have to do the work. So, how does that play out as you start working with an organization and say, yes, we're going to start implementing some of the things? Where do they start internally as an organization to make it happen?

Olivia Gambelin: The lowest hanging fruit, but I want to say also the most important foundational aspect of this, is that it is quite literally a change in many ways. It's a change in processes around how you're using and building [00:16:00] AI. It's a change in culture. It's a lot of change happening to get this shift onto Responsible AI.

The lowest hanging fruit is quite literally starting with the people. You can't have data if you don't have people, you can't have AI if you don't have data, and you definitely can't have AI if you don't have people. So, the people in the organization really are at the root of either your success or failure when it comes to implementing ethics. And these are new skill sets and they're new conversations. So, we as people very naturally engage in ethical decision-making on a day-to-day basis. We're not necessarily aware of that decision-making process.

So, giving your teams training on the ability to recognize when they are making an ethical decision is the best place to start because now, instead of this being, let's say a blank slate of what in the world does this mean? I don't know where any of this is. You've equipped your teams with the ability to pinpoint, Hey, I think I need to talk to someone at this point in time. Or, hey, something [00:17:00] feels off here, or I need a framework. It highlights the points in time when the ethical analysis as the decision analysis tool needs to happen.

It also helps your employees feel empowered instead of this growing fear of Ethics is scary and AI is scary and I'm afraid I'm going to do something wrong and we're creating all this risk and harm and I don't know. It creates this empowerment within the company for that cultural shift saying, no, there are solutions that we can do, and if we work together, they're very easily accomplished solutions. We just need to all be speaking the same language to be able to basically surface those solutions that are sitting right in front of our eyes.

Tim Curtis: I'm going to pivot off that empowerment word. Got a couple of things bouncing around my head here, but the first paradox I'm just sitting here thinking about is that typically when a business engages, not in AI Ethics, but in the traditional field of ethics or in any kind of governance, [00:18:00] it's really an engagement where they are being constrained. They are being pulled back and constrained into an area of appropriateness, or their operations have strayed out of that.

So, it's interesting to hear you talk, because your firm doesn't, by any stretch of the imagination, walk away from innovation. You actually embrace it. The paradox that I think about is that by putting in a framework, an ethical framework in which to operate, you actually free yourself to be innovative and to be unencumbered. And I know that probably sounds a little backwards to some, but I think when the boundaries are clearly defined and you know where the open space is, there's a freedom to then kind of go and chase that.

I love the fact that you have set that up as a goal and that throughout your site and throughout your materials that innovation theme is called out specifically. Because AI at its root, especially [00:19:00] generative AI, is immensely innovative and immensely iterative in terms of what it can do to upskill you and your company. That to me is key. How do you empower someone? Let's say you defined those boundaries. How do you then turn and encourage them to pursue innovation and AI?

Olivia Gambelin: Yeah, which is a great question because Ethics definitely can be at times almost pigeonholed into, I'm going to come in and tell you no. I can guarantee you there's only a handful of times I've ever had to tell someone flat-out no, don't do that. One of which was no, don't put private data into ChatGPT. Just don't do that.

Tim Curtis: Good advice, by the way.

Olivia Gambelin: I used to say, I've never told someone no, and then I had an engineer come to me and he's like, I took this private data set and it was on healthcare data and it was really fascinating. When ChatGPT came back, I was like, stop, don't, no more, no more. That one's a hard no.

But as an Ethicist, I don't tend to tell someone [00:20:00] no. Instead, when someone comes to me and we're working through a decision or we're working through some feature design, let's say, or implementation, and they tell me something where I'm like, it's not as aligned as it could be. What I'm doing is I'm asking them well, what if you looked at it through this lens? Or can we push this further? Is there another step that we can take here that brings that innovation to another level? Part of that empowerment and that narrative that I work with clients on is these aren't constraints. What we're doing is we're targeting that innovation and creativity.

We'll use another data example here. So, imagine you are designing a system and you've got three different data points that you're pulling in. But we look at the data points and we say, hey, that third one, based off of your value and commitment to privacy, that third one is going to be really tricky to collect, and that might actually start to violate your users' sense of privacy. Do we need that third point?

And if the team comes back and like, yeah, we really need [00:21:00] that third point. Okay. Well, is there another way we can collect it? No. Okay. Well, is there another data point in parallel that could get us that same information? Well, yeah. Now, we're stepping into a whole nother innovation cycle. We're having to push ourselves to do better. It creates better technology at the end of the day because you're not just stopping at what's technically feasible. You're looking at this is technically feasible, but what is going to better serve my users at the end of the day? If it better serves my users, that's a better product and solution.

Tim Curtis: One of the exercises, not necessarily related to AI at all, that can be so powerful is looking at the first-party or zero-party data a brand possesses and finding ways, like you said, is there another way to gain visibility into that same insight? There oftentimes is. And when you begin to pivot and you look at that data differently, it's sort of like [00:22:00] holding some crystal or something up to a light: you see it refracted, and then you change it and see another refraction.

It's interesting how that can be innovative just in itself, and you can gain insights from things that you haven't done. It's really about kind of stepping outside of your traditional thinking parameter. But again, by establishing those boundaries, you're very clearly giving them permission to do something different and you're encouraging along the process. I love it. Love it. Love it. Love it.

Erik Martinez: It's absolutely fantastic. So, I want to pivot to this topic of bias and ensuring inclusivity. We hear about bias and depending on where your personal worldview sits, bias means different things. Right? But I understand that as we're training these tools, how we limit the data set that they're training on really has a massive impact on the results. And if we open that training to a broader data set, we change the results that we're going to get.

So, when you're working with companies on implementing ethical practices and [00:23:00] dealing with this specific issue in terms of their audience, what do you talk to them about? What types of questions are you asking to ensure, or at least work toward, minimizing bias, since maybe we don't ever completely eliminate it, when we're working with these tools?

Olivia Gambelin: So, the first point that I start with there, and it's probably not the most obvious, is actually just defining what fairness means for the company. What I mean by that is, let's say the ethical value that you're using to combat unwanted bias is fairness. You're looking at how do I utilize this concept of fairness to ensure that my product, my solution, is fair, that the unwanted bias has been removed.

What does fairness mean though? That is a huge question because you can approach it from, well, it's equal opportunity, equal outcome. One of my favorites that I heard once [00:24:00] was equal quality of service. There's different ways to define fairness and there's different ways that fairness is defined per product per user base.

So, one of the most important things to do is actually sit down as a company and make that decision of, well, for us, fairness means, let's say, equal opportunity. What that does is it allows us to create this North Star that everyone is now focusing on, but it also helps us understand that we may have some blind spots when it comes to, say, equal outcome.

We do need to check that because there will be blind spots there, but our definition is equal opportunity. Cool. How do we break that down then? What does that mean per department? And what does that mean per product, especially the bigger companies, what does it mean per product or say key solution?

Once you have that living definition in place, what we're then able to do is actually work on that employee level, down in the trenches, say with the data scientists and the engineers, working on what kind of fairness metrics do we need [00:25:00] to be able to track per that definition? And what is our level of risk or appropriate bias that we're willing to take on? What can we do because it is impossible to completely eliminate bias? What can we do?

So, say we decide that we have the system. We understand that it does potentially have a bias towards gender at some point. What human checks do we then put in place to ensure that again, the further decisions that are made can be caught if there is a significant risk around that gender bias? It's putting all these checks into place really.

But I've seen it happen many times, especially in the larger companies, where they have very, very enthusiastic data science teams and data scientists that are trying to put together a solution around fairness, but they're doing so in a siloed way. So, one person is working on one metric, and two seats down their colleague is working on a completely different metric or a completely different set [00:26:00] of metrics. You always want at least three fairness metrics that you're measuring for. Again, to catch blind spots.

But if you don't have that North Star definition of fairness, and you don't have points in time where your teams come together and discuss, is this working, have we actually monitored the model once it's been pushed to production, once it's live? If you don't have those discussions, there's no way that you're going to be able to tell when that bias is happening, other than you've got a customer complaining to you that they've faced some type of scrutiny.

I've kind of completely gone down the rabbit hole here, but to summarize, start by defining fairness, and that itself already opens up the channels of communications within your team to be able to start working towards a solution that will work for your company.
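To make the idea of tracking more than one fairness metric a bit more concrete, here is a minimal Python sketch. It is not from the episode: the two metrics shown (demographic parity and equal opportunity), the group labels, the toy data, and the tolerance threshold are all illustrative assumptions, and a real program would track at least the three metrics Olivia recommends and route "REVIEW" cases to a human check.

```python
# Illustrative sketch: comparing two common fairness metrics across two groups.
# Metrics, column values, and the tolerance below are hypothetical examples.
from typing import Sequence

def rate(flags: Sequence[int]) -> float:
    """Fraction of positive outcomes (1s) in a list of 0/1 flags."""
    return sum(flags) / len(flags) if flags else 0.0

def demographic_parity_gap(preds, groups) -> float:
    """Difference in positive-prediction rates between groups A and B."""
    a = [p for p, g in zip(preds, groups) if g == "A"]
    b = [p for p, g in zip(preds, groups) if g == "B"]
    return abs(rate(a) - rate(b))

def equal_opportunity_gap(preds, labels, groups) -> float:
    """Difference in true-positive rates (recall) between groups A and B."""
    a = [p for p, y, g in zip(preds, labels, groups) if g == "A" and y == 1]
    b = [p for p, y, g in zip(preds, labels, groups) if g == "B" and y == 1]
    return abs(rate(a) - rate(b))

# Toy data: model predictions, true labels, and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

TOLERANCE = 0.10  # hypothetical threshold a team might agree on
for name, gap in [("demographic parity", demographic_parity_gap(preds, groups)),
                  ("equal opportunity", equal_opportunity_gap(preds, labels, groups))]:
    status = "OK" if gap <= TOLERANCE else "REVIEW: trigger human check"
    print(f"{name} gap = {gap:.2f} -> {status}")
```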

Erik Martinez: That's incredible. I mean, one of the things I hear you saying over and over in the course of our discussion today is start with the question of what this means for your company. What are your values? What does it mean to do [00:27:00] this within the context of whatever we do and how we portray ourselves in the world? And then establish some way of measuring that.

Olivia Gambelin: Yeah.

Erik Martinez: I mean, it's just business process, right? When you start talking about ethics, it's like, oh, we don't equate business process with ethics, we think of them as two separate things, and now we're trying to mash them together. And the reality is, you're really promoting the idea that these two things should work hand in hand, and it's just part of your planning process that you should be doing anyways. That's a really, really powerful statement.

I think the challenge that we all face, when we're running a million miles an hour in our everyday, very busy lives is making sure that we slow down and take some time to think about these things in context, put some things in steps. The other thing I'm hearing you say, it's like, you know, this is a journey, not a [00:28:00] destination. You can take small practical steps and if you're constantly working those steps, over time you're going to have a very great outcome, whatever that outcome is that you envisioned at the beginning.

So, I think those are really, really important questions. So, I'd like to pivot again and move us back into the world of Ecommerce retail, which is where most of our audience lives in some way, shape, or form. All of us are interacting with these big companies, right? We're interacting with Amazon or the walmart.coms or individual brand sites. As we start looking towards the future, where's this going to evolve? Where are we going to head in terms of integrating Ethical AI with Ecommerce?

Because if there's an industry that's ripe for problems in using this type of technology, retail is it. Like Tim said earlier, we're trying to grab as much data as we can so we can personalize the experience, and [00:29:00] yet, at the same time, trying to respect privacy and data security and all of those things. How is this going to evolve? What do the Ecommerce platforms need to do? What do business leaders in digital marketing spaces need to do to make sure that we're headed in the right direction?

Within our organization, we were having a conversation. So, Google, the great walled garden of Google, in their paid search efforts to combat intelligent tracking prevention, have implemented some tools where they say, hey, you're going to pass some more data from the transaction to Google so we can feed that to our AI, so we can serve up better ads and you get better return and all that stuff.

And when I asked my team, I said, okay, but should we be doing that? Should we be sharing that information with Google in pursuit of that transaction? I think you've given me a few questions today to go back to my team with and say, is there another way? Can we get [00:30:00] to the same spot a different way without just saying, hey, you know, Olivia, you just transacted on this website and I just passed your email address? Yeah, it's hashed and it's encrypted. And I've just passed your phone number and all your personal information over to Google, who I now have to trust is going to do something responsible with the data that we just passed to them. But you may not even know I did it.
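For readers unfamiliar with what "hashed" means in Erik's example, here is a minimal sketch of the kind of one-way transformation he is describing: normalizing an email address and hashing it with SHA-256 before it leaves your systems. This is a generic illustration of hashing, not the specific scheme any particular ad platform requires; the normalization rules shown are assumptions for the example.

```python
# Minimal illustration of one-way hashing of an email before sharing it.
# Normalization rules and the SHA-256 choice are generic assumptions here.
import hashlib

def hash_email(email: str) -> str:
    """Lowercase, trim, and SHA-256 hash an email address."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(hash_email("  Olivia@Example.com "))
# The recipient can match this hash against hashes of emails it already knows,
# but cannot recover the original address from the hash alone.
```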

Olivia Gambelin: I would say, especially in the world of Ecommerce in light of all of this, one of the biggest questions. Now, I'm laughing because you've called out the fact that I do everything with questions, and through questions, that's the philosopher in me. So, the biggest question that Ecommerce is going to be facing in these next few years is this trade-off between privacy and personalization.

What happened when we first started in this Wild West, let's say, of being able to collect data and information on our target audience, and it really was a Wild West, is that we truly had no understanding of, well, [00:31:00] what feels like a privacy violation and what is too much information. Slowly but surely, our market and our society and technology users in general have become more digitally literate and are able to understand, well, I don't know exactly what this company is doing, but I can tell they're taking my data based off of these few indicators, or based off of the personal ads that I'm getting, and I don't like that, and I'm going to control it, especially the younger generations.

They know, primarily on social media, but they know how to manipulate their online data to be able to hide facts or to be able to showcase different things. They quite literally will groom their algorithm. Again, I'm talking about social media feeds, so not necessarily Ecommerce, but they will groom their algorithm to have it show them the kind of videos that they want to see. That can happen with personalized ads too. You can groom your algorithm by either showing different parts of you online or not. Younger generations really are starting to learn this almost new [00:32:00] language, this digital language, and ability to do that. I digress.

The privacy and personalization question is this trade-off of how much personalization is actually good personalization. Because there is a line that gets crossed where, me as a user, I'll see an ad and I'm like, I don't want that. Maybe I wanted that product, but I never said to anyone that I wanted that, or that was just a thought in the back of my head, and now seeing it on my news feed makes me feel uncomfortable, because now I'm here trying to figure out, where did I let my data out? And I'm trying to walk back, where did I possibly mention that I wanted a new backpack?

I didn't mention that. I just know that my old one is getting torn up, and I haven't even talked about this, and yet I get an ad for a new backpack. I feel uneasy. And so, when it comes to Ecommerce, having those targeted ads is very important because it does catch that, oh, I was just thinking about that, or I was just searching that. That is a good aspect. But the deeper in that you go [00:33:00] and the more niche that you get with those targets, it isn't necessarily a good thing.

I think what we really need, especially out of the Ecommerce field, is to track whether or not that personalization helps. Are the numbers significant enough to justify the data being taken in? If it's a one-percent increase, does that really justify, say, taking someone's hometown, birthplace, and so on, as information? That doesn't seem like a good enough trade-off. So, I would say that question of privacy versus personalization, as an Ecommerce brand especially, how far are you willing to go in terms of that personalization, knowing that there is a line that you are up against? And how far are your customers willing to receive that? One of the best things is just to talk to the people that you're trying to reach, to understand, hey, is this helpful when you see these ads, or is this too much?

Tim Curtis: Yeah. We kind of go back to that trust deficit again, [00:34:00] where at one time you had the social media giants and Google running campaigns trying to reassure users that they have their best interest at heart in protecting your data. At the same time, they're exercising passive listening, all the things that we would typically chalk up to highly invasive and that, by any measurement, consumers today unanimously place in the category of a violation. Like, you have violated the intent of this relationship and you are pulling in data you have no business collecting.

When we pivot and these same players are now investing very heavily, in some cases trying to catch up in the AI race, we don't trust them, for good reason. And you see the younger generation, as you mentioned, with sort of a herd mentality kind of creeping in, and what they're doing is pretty well documented. They're not filling out certain forms and profiles. [00:35:00] Or if that profile or form is needed, they'll fictionalize data at higher rates than anybody has ever done it.

So, what really is happening is we're getting a lot of garbage into that processing, so those algorithms are getting baked, as you mentioned. So, at the end of it all, where does it lead us? It leaves us with AI that cannot be as innovative at targeting because it's crossed boundaries. And so the pushback is the consumers, quite frankly, not wanting to be a victim. And I can't say that I blame them. I've seen it. There are a lot of things I've seen over the years working in this space that just leave you cold. I mean, just thinking, how in the world have we gotten to this point?

So, I think we're all very eager for the rise of Ethics, both in the environment and in the digital behavioral area, to ensure that what we're doing is establishing boundaries and staying within those boundaries. And I think it's the only way that digital marketing is going to regain any [00:36:00] credibility. It's an unfortunate place we're in, but the Wild West is over. That's done, and boy, did it end quickly. Probably for good reason. But yeah. Well, before we kind of close out here, are there any other pieces of advice or pearls of wisdom that our listeners might need to hear from you today?

Olivia Gambelin: Really, anything that I do in this space now touches on this idea of the pursuit of good tech. So, I promised you I would talk about my ancient philosophers. Ethics originated from this idea of the pursuit of the good life, and that good life is one that is fulfilling. It looks different person to person, but it's really defined by the values that that person holds. So when you're living a life in alignment with your values, when you're living a life in a way that is reflecting what you value in life, then that's where you find fulfillment, that's where you find purpose. That's how you achieve a good life on a human level.

[00:37:00] Now, we have the opportunity in our digital age for two very cool things. On one hand, we can look at designing our technology and asking ourselves the question, how can I create technology that helps me in that pursuit of the good life, not one that helps me with commodity and helps me get my Amazon package in 30 minutes? But what kind of technology helps me in the pursuit of my own good life, and hopefully those around me as well? But how do we also design in the pursuit of good tech, technology that's in alignment with our values?

I always want to end with that because it's a hopeful note. It is a rallying call in the sense that we have the potential to do this. We've been doing this as humans for millennia. Now, we're translating it into the digital space and it opens up so many more opportunities for us as people to better our own lives and better the lives of those around us if we really, truly embrace that pursuit [00:38:00] of good tech.

Erik Martinez: That's a fantastic way to close. Well, Olivia, thank you so much for taking time out of your schedule and joining us on the show today. If our listeners want to reach out, what's the best way to get in touch with you?

Olivia Gambelin: You can find me at oliviagambelin.com. You can also find me on LinkedIn under the same name, or you can check out Ethical Intelligence. That's ethicalintelligence.co. We've got some very cool stuff going on there as well.

Erik Martinez: Well, thank you so much again for your time. I think we could have probably spent all day talking about this and it would have been a lot of fun. Let's make a plan to check in in like 6 or 7 months and talk about this again.

Olivia Gambelin: I will also have a published book at that point.

Erik Martinez: When is that releasing?

Olivia Gambelin: Knock on wood, June 2024.

Erik Martinez: We would be happy to talk about the book when it comes out. Well, everybody, thanks for listening to today's episode of the Digital Velocity Podcast. I'm Erik Martinez from Blue Tangerine.

Tim Curtis: And I'm Tim Curtis from CohereOne.[00:39:00]

Erik Martinez: Have a great day.
