Is Machine Learning Ethical? Test your Skills with 3 Case Studies

Recently, I did a presentation on machine learning at Clickfunnels headquarters in Boise, ID.

And instead of my usual strategy of talking about algorithm selection or deployment strategies, I asked everyone in the room the following question:

If machine learning is important enough to you to continue sitting in this room, why aren't you learning it?

I thought for sure it'd be excuses like time...

Or lack of advanced degrees like a Ph.D...

But surprisingly? 75% of them were resistant to learning machine learning because of ethical concerns.

Meaning? They were afraid that learning machine learning would cause more harm than good.

Why Technology Isn't Even Close to Taking Over the World

Photo by Franck V. on Unsplash

It's easy to justify that fear, given the AI-assisted robots we see in Terminator movies and the latest Dan Brown novel.

BUT.

The reality is that our smart devices are still...well...pretty dumb.

Let alone capable of taking over human civilization.

In fact, we're at a point where "data science" and "machine learning" are often little more than marketing buzzwords.

How do I know? Because when I ask startups why they hire me, CTOs have said things like...

I want to be able to tell an investor with a straight face that our app is powered by AI technology.

And given that we have Siri and Alexa, THAT is definitely possible.

But if software developers don't have experience working at companies like Apple or Amazon, how are they supposed to know what to do with all of the data they're collecting...

Let alone build technology that supposedly wipes out humankind?

Luckily, ethical machines are built by ethical people

The thing that software developers often forget is that THEY are in charge of the technology.

Not the other way around.

Because as long as there are ethical people, there will be ethical machine learning.

And yet - ethics can get lost in the shuffle when trying to reach new realms of knowledge and understanding about the world we live in.

Just take the famous Milgram experiment of the early 1960s, where study participants were instructed to test the knowledge of an unseen man in another room - and to administer electric shocks when he got an answer wrong.

The only problem? The unseen man was an actor - and the shocks weren't real.

Unfortunately, the psychological damage to the men administering the shocks was very, very real.

Photo by Erik Drost

So while Milgram may have satisfied his curiosity about why so many Germans followed the orders of the Nazi regime despite obvious ethical concerns...

It was at the cost of damaging the psyche of men who had opted into the study without knowing what was really going on.

Because of experiments like this, the Belmont Report was created.

Even though it's nearly 50 years old, I use its 3 guidelines to determine if something is ethical - or not - in my work.

The guidelines are:

  1. Respect for persons: Always protect someone's autonomy.

  2. Beneficence: The benefits must outweigh the risks.

  3. Justice: Costs and benefits must be distributed fairly.

There are more up-to-date machine learning ethics standards you can follow, but these are just 3 rules to remember - and a reminder to always be ethical in our work, whether it involves machine learning or not.

But this can all seem esoteric without a concrete problem to apply it to.

So here are 3 case studies to let you test your ethical skills...and see if you measure up.

Scenario #1: "If you know Asians have a higher FICO rating, would you lend differently?"

Photo by rawpixel on Unsplash

Thanks to technology, it's easier than ever for lenders to vet loan applicants online...and use machine learning to predict optimal rates.

However, lending is a big business in the United States, and companies grapple with balancing the demand for loans against the likelihood that each loan will be paid back over time - a decision that comes down to 2 questions (the first of which I'll sketch in code right after the list).

Specifically...

  1. How risky is the borrower?

  2. Given the borrower’s risk, should we lend to him/her?
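
To make question 1 concrete, here's a minimal, deliberately simplified sketch of how a lender might model borrower risk with machine learning. Everything in it is hypothetical - the file name, the column names, the choice of logistic regression - and it assumes pandas and scikit-learn are available. The detail worth noticing is that race never appears in the feature list, which is the distinction Bernard draws a little further down.

```python
# A toy sketch of question 1 ("How risky is the borrower?").
# All file and column names are hypothetical; assumes pandas and
# scikit-learn are installed.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical data: one row per past loan.
loans = pd.read_csv("historical_loans.csv")

# Features the model is allowed to see. Race (and obvious proxies
# for it) is deliberately left out of this list.
features = ["fico_score", "annual_income", "debt_to_income", "loan_amount"]
X = loans[features]
y = loans["defaulted"]  # 1 if the borrower defaulted, 0 otherwise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The predicted default probability is the "risk" in question 1; a rate
# or approval decision (question 2) would be layered on top of it.
risk = model.predict_proba(X_test)[:, 1]
print(risk[:5])
```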

So maybe it's true that Asians have a higher average FICO rating than other ethnic groups. And lending differently on that basis would violate the Belmont Report's principle of justice: that costs and benefits must be distributed fairly.

But the reality is that a better FICO rating - regardless of race - already results in a lower interest rate.

If demographics post-lending show there is a correlation/relation between race/gender and FICO scores, that would just be another data point - but should not be used "directly" to drive lending practices.

~Bernard O.

Lending applications typically don't ask for race information, so maybe Bernard is right that discovering such a correlation would just be another data point...not an ethical dilemma.

Or is it? You be the judge.

Scenario #2: "If you can, should you hold onto as much data as possible?"

Photo by Tobias Fischer on Unsplash

It sounds far-fetched, but smart-device data being turned against its owner has already happened a few times - with the Apple Watch, Fitbit, and Alexa.

Obviously, all of these devices are designed to remember your preferences and record your history - that's what makes them "smart" devices.

But what happens when the data is being used against you?

...like to convict you of arson and fraud?

Is it a blatant violation of your rights?

Or is it totally valid, given that you chose to buy and use the device - and therefore agreed to its terms of use and privacy policy?

Generally we conclude that it's the customer's choice which they exercise by agreeing to our terms of service or end-user license agreement - which they never read; a behavior we expect. [But] the privacy policy is not only for the education of the customer. It must be a deeply embedded way of doing business within the organization. It must be tested against data in the same way that all software functions/features are tested.

~Dewayne V.

I'm a believer that the Belmont Report's point about autonomy applies here - and I don't use any of these smart devices myself for that reason.

But I liked what Dewayne suggested about ethics being a fundamental part of how business is conducted when building devices like these.
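
Dewayne's point that a privacy policy "must be tested against data in the same way that all software functions/features are tested" can be made concrete. Here's a minimal sketch of what that might look like: a unit test that fails whenever stored data outlives the retention window the policy promises. The 90-day window, the SQLite table, and the column names are all assumptions I've invented for illustration - not any real company's setup.

```python
# A hypothetical unit test that treats a privacy policy's retention
# promise like any other software requirement. Assumes a SQLite
# database with a "recordings" table whose "created_at" column holds
# timezone-aware ISO-8601 timestamps; the 90-day window is invented.
import sqlite3
import unittest
from datetime import datetime, timezone

RETENTION_DAYS = 90  # what the (hypothetical) privacy policy promises


def oldest_record_age_days(conn):
    """Return the age, in days, of the oldest stored recording."""
    row = conn.execute("SELECT MIN(created_at) FROM recordings").fetchone()
    if row[0] is None:
        return 0  # nothing stored at all
    oldest = datetime.fromisoformat(row[0])
    return (datetime.now(timezone.utc) - oldest).days


class TestRetentionPolicy(unittest.TestCase):
    def test_no_data_outlives_the_policy(self):
        conn = sqlite3.connect("devices.db")  # hypothetical database
        self.assertLessEqual(
            oldest_record_age_days(conn),
            RETENTION_DAYS,
            "Stored data is older than the privacy policy allows",
        )


if __name__ == "__main__":
    unittest.main()
```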

Scenario #3: "If my car runs over a child but saves me, is that right?"

Photo by Jp Valery on Unsplash

It's already hard enough to determine fault in a car accident. It seems even harder when a self-driving car is involved.

To be fair, Teslas aren't completely autonomous yet. Someone still has to operate the vehicle, and there's a disclaimer that the driver needs to stay alert.

This is a case where we have to determine whether the benefit of saving me (the driver) outweighs the risk of hurting the child.

And instead of me or you determining the maximum amount of beneficence, MIT is working on crowdsourcing that information through its Moral Machine project.

Moein A. believes that most of the time, traffic rules work in our favor - but there are edge cases where fault is completely dependent on the scenario.

If I am driving according to traffic rules on a road and a child suddenly appears in front of the autonomous car, and there is a valley on the side of the road, the car should save me by continuing to drive straight - and I, unfortunately, run over the child. His parents are responsible for the accident. But if the car is driving on the streets and a child is crossing at the zebra line, the car should avoid the accident even if it is at the cost of the driver's life.

What's interesting about Moein's answer is that if it were an adult instead of a child, there'd have to be some decision criteria about whether they were standing in a pedestrian zone vs. jaywalking to determine fault.
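
Just to show how quickly an intuition like Moein's has to become explicit decision criteria once someone actually has to program it, here's a toy encoding of his stated rule. This is one person's opinion translated into code for illustration only - emphatically not a real or recommended autonomous-driving policy.

```python
# A toy encoding of Moein's stated rule, purely to show how informal
# intuitions turn into explicit decision criteria in code.
# This is NOT a real or recommended autonomous-driving policy.
from dataclasses import dataclass


@dataclass
class Situation:
    car_following_traffic_rules: bool
    pedestrian_in_crosswalk: bool  # e.g. crossing at the zebra line


def who_is_protected(situation: Situation) -> str:
    """Return whose safety Moein's rule prioritizes in this situation."""
    if situation.pedestrian_in_crosswalk:
        # Pedestrian has right of way: avoid them, even at cost to the driver.
        return "pedestrian"
    if situation.car_following_traffic_rules:
        # The car obeyed the rules and the pedestrian appeared suddenly:
        # per Moein, the car continues straight and protects the driver.
        return "driver"
    # Anything else is an edge case the stated rule doesn't cover.
    return "undetermined"


print(who_is_protected(Situation(True, False)))  # -> driver
print(who_is_protected(Situation(True, True)))   # -> pedestrian
```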

But Moein is only one person with this opinion (so far).

It remains to be seen if MIT's project can come to an agreement about what constitutes moral behavior when it comes to technology like self-driving cars.

A quick recap

It might feel silly to talk through these 3 scenarios in your next team meeting.

But with technology companies growing at a rapid rate, ethical dilemmas like the ones above will only become more prevalent.

As for whether machine learning does more harm than good, I still don't think we're at the point of killer robots yet…

But regardless, it always helps to remember the guidelines of the Belmont Report:

  1. Respect for persons: Always protect someone's autonomy.

  2. Beneficence: The benefits must outweigh the risks.

  3. Justice: Costs and benefits must be distributed fairly.

Will these guidelines help you make the right decision? Not necessarily - because ethics are never straightforward.

So I always keep the following in mind: As long as there are ethics-aware engineers, there is always the possibility for ethical machine learning.

And if you eventually decide that machine learning is important to the future of your job, I hope you choose to be one of them.