“We all know women deserve less money than men, right?”
If machines could talk, that might be a direct quote from the algorithm that determines credit limits for the Apple Card, a new financial product from the tech giant.
In late 2019, entrepreneur David Heinemeier Hansson (cofounder of Basecamp) tweeted that when he and his wife both applied for the Apple Card, its credit-limit algorithm approved him for 20 times the limit it gave her.
His story went viral on Twitter, prompting a response from Apple.
What happened next has serious implications for business leaders in the age of artificial intelligence.
When asked, multiple Apple reps couldn't explain how the algorithm worked, Hansson reported on Twitter. And while they were respectful of his concerns about discrimination, they blamed the algorithm for the issue.
I wasn’t even pessimistic to expect this outcome, but here we are: @AppleCard just gave my wife the VIP bump to match my credit limit, but continued to be an utter fucking failure of a customer service experience. Let me explain…
— DHH (@dhh) November 8, 2019
As Hansson’s complaint moved up Apple’s chain of command, it appeared management didn’t have any visibility into how the algorithm worked either. Nor did they know how to fix the problem.
All the while, Hansson torched the brand online.
So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.
— DHH (@dhh) November 8, 2019
The issue became such a big deal that Apple cofounder Steve Wozniak chimed in, saying the same issue happened to him and his wife.
To be fair, this is an even more egregious version of the same take. THE ALGORITHM is always assumed to be just and correct. It’s verdict is thus predestined to be a reflection of your failings and your sins. https://t.co/BF4BWubpkl
— DHH (@dhh) November 9, 2019
It turned out Apple didn't even build the algorithm.
Apple isn’t a bank, so it partnered with Goldman Sachs to launch the Apple Card. Goldman was responsible for creating the algorithm.
So, Goldman Sachs got put on blast, too. In response, Goldman released a short statement saying it didn't endorse gender bias. The ineffectual statement didn't go over well.
Apple and Goldman Sachs have both accepted that they have no control over the product they sell. THE ALGORITHM is in charge now! All humans can do is apologize on its behalf, and pray that it has mercy on the next potential victims. https://t.co/LFyPYbtRlh
— DHH (@dhh) November 10, 2019
Goldman is now also the subject of a probe by the New York Department of Financial Services over the issue.
The final straw came when even a former senior Apple employee made it clear that Apple bore some blame for the incident.
As a former senior Apple employee, this @applecard issue is very disappointing to me. I feel betrayed. Apple is positioned as the good team in tech. And I believe they are. But this is an issue that they have to fix. They have to think different and be better.
— Dave Edwards (@dedwards93) November 9, 2019
And this all played out in a massively public conversation.
Is your head spinning yet?
A single instance of algorithmic bias caused:
- Serious ethical harm: intentionally or accidentally enabling financial discrimination against women.
- Major damage to brand equity for Apple (which is famed for its brand strength) — over an algorithm it didn’t create and didn’t understand.
- Possible legal repercussions for Goldman Sachs — over an algorithm it did create and didn’t understand.
If you’re a business leader who doesn’t think AI is worth paying attention to, then you’re not paying attention.
If you're not using AI right now, you may be weighing a strategy and adoption plan. And even if you have no AI plans at all, I guarantee you at least one of the products or services your business works with uses algorithms.
Which means bias in artificial intelligence presents a range of complex challenges for you and your company. Challenges you’re probably not prepared for right now.
Bias in AI usually happens because of the data used by the AI system, not the system itself.
In the case described here, the data might intentionally or accidentally reflect human biases about different genders, races, or groups. For whatever reason, the data processed by the algorithm is skewed in a way that results in women being perceived as more of a credit risk than men.
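To see how this can happen even when no one intends it, here is a minimal sketch with made-up data. All the names, records, and the toy "model" below are hypothetical illustrations, not Goldman's actual system: the model never sees gender at all, yet it reproduces a historical gender gap through a correlated proxy feature.

```python
# Hypothetical illustration: a "gender-blind" model can still reproduce
# a gender gap if its training data contains a proxy feature correlated
# with gender, plus historically biased outcomes.

# (proxy, gender, limit): the proxy (imagine a spending-category code)
# happens to correlate with gender in this made-up historical data, and
# past credit limits were far lower for women with identical finances.
historical = [
    ("A", "M", 20_000),
    ("A", "M", 22_000),
    ("B", "F", 1_000),
    ("B", "F", 1_200),
]

def predict_limit(proxy, data):
    """Naive model: average past limits for applicants with the same
    proxy value. Note that gender is never an input."""
    limits = [limit for p, _gender, limit in data if p == proxy]
    return sum(limits) / len(limits)

# Outcomes still split along gender lines, because the proxy carries
# the historical bias forward into every new prediction.
print(predict_limit("A", historical))  # 21000.0 -- profile common among men
print(predict_limit("B", historical))  # 1100.0  -- profile common among women
```

The point of the sketch: simply removing the protected attribute from the inputs does not remove the bias, because the skew lives in the data itself.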
But the data can also be biased in more innocent ways.
Say you purchase an AI system trained to write highly engaging social media posts. The technology looks great: it’s been trained on over two million successful posts and is going to up your social engagement while cutting time spent on social by 90%.
What’s not to like?
Do you know anything about the two million posts the AI tool was trained on? Were the posts from businesses that are like yours? Were the posts in the language you do business in? What platforms were the posts on?
Good luck finding out. And you'd better pray it works for you after your name goes on the invoice.
The reality is this:
If your company’s products use AI…
If you are actively investigating AI for your brand…
Or if you partner with anyone using AI…
You’re in the exact same position that Hansson and his wife were in.
You don’t know if the data used by algorithms that affect your brand is biased.
If it is biased, you don’t have a full picture of how or why the data is biased.
And you probably don’t have a strategy for what to do and say if something goes wrong because of bias.
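One question you can start asking today: for any automated decision that touches your customers, what do the outcomes look like across groups? Below is a minimal audit sketch on hypothetical outcome data, using the "four-fifths" rule of thumb from US employment-selection guidance: if a group's favorable-outcome rate falls below 80% of the best group's rate, that is a red flag worth investigating.

```python
# Minimal disparate-impact check on hypothetical decision records.
# Each record: (group, outcome).
decisions = [
    ("F", "approved"), ("F", "denied"), ("F", "denied"), ("F", "denied"),
    ("M", "approved"), ("M", "approved"), ("M", "approved"), ("M", "denied"),
]

def approval_rates(records):
    """Favorable-outcome rate per group."""
    totals, approved = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        if outcome == "approved":
            approved[group] = approved.get(group, 0) + 1
    return {g: approved.get(g, 0) / totals[g] for g in totals}

rates = approval_rates(decisions)
best = max(rates.values())

# Flag any group whose rate is under 80% of the best group's rate.
flags = {g: rate / best for g, rate in rates.items() if rate / best < 0.8}

print(rates)  # {'F': 0.25, 'M': 0.75}
print(flags)  # 'F' flagged at roughly a third of the best group's rate
```

A check like this won't tell you *why* the disparity exists, but it turns "we don't know if the algorithm is biased" into a concrete, measurable question you can put to any vendor.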
You’re not alone.
This is a really hard problem that even the people building the world’s most sophisticated algorithms haven’t figured out.
No one has the answers yet.
But it is time you start asking questions to better understand how your brand interacts with algorithms.
Because if your brand gets entangled with algorithms that have serious bias problems, it’s going to come back on you — likely at lightning speed, in a public forum, where all your customers can see.
And when you’re asked about who bears responsibility for a sexist, racist, bigoted, or just plain wrong machine-assisted outcome…
Saying “It’s just the algorithm” isn’t going to cut it.
This article first appeared on www.marketingaiinstitute.com.