Let's have a discussion on the ethical challenges of AI.

Artificial Intelligence, Emerging Tech

9 comments


Pulse User

Greg, this is a vast topic. It comes down to how you train your model: training is done by humans, and if the training favors the company over the customer, then that is definitely a one-sided perspective. If, however, customer satisfaction is added to the criteria, that will be better.

Pulse User

Lots of issues, though many are just common sense.

Challenges:

- Unrealistic expectations: AI is nothing more than advanced statistics (in some cases not even that). Not to minimize the power of the technology, but there is nothing "intelligent" about it. As such, non-technical people get way ahead of themselves in their beliefs about what the technology can actually do. There is a great article on this point on The Information (https://www.theinformation.com/articles/the-wrinkle-in-forecasting-autonomous-cars). Key point: most technology follows an S-Curve, not a J-Curve, yet it is easy to mistake the two.

- Using it in the right way: AI can easily make predictions that outperform humans in many cases. The problem is that the predictions can be wrong some of the time, and often in unpredictable ways (statistical error). When the cost of the AI being wrong is higher than the benefit of it being right, it isn't a good fit. The converse is generally true: when the benefit of being right outweighs the cost of being wrong, AI can be really valuable. To illustrate, take IT ticket routing. We implemented this at Facebook five years ago. When we did, it was 92% accurate. However, the cost of a ticket being routed to the wrong team was far more than the benefit of the humans it replaced. Today, MoveWorks has perfected this and built a UX that quickly escalates the ticket to a human, alleviating that issue and making it a viable use of AI. (A back-of-the-envelope version of this cost/benefit math appears after this comment.)

- Human interaction: The above issue highlights one of the most important questions: where does the AI stop and where do the humans begin? Tesla and Google have very different perspectives on this for cars. Tesla makes it super easy for a human to take over the car at any time; as such, their AI is only suitable for a product where the human is still the driver and responsible party. Google's attempt is to eliminate the human altogether. There is much more to discuss on this point.

Ethics:

- AI amplifies existing bias in data: Because AI learns from data based on our own behaviors, issues of bias will be amplified by it. A recruiting system built to predict whether a candidate will pass an interview can easily reinforce gender or ethnic biases that already exist. These issues need to be tested for and controlled for (a minimal example of such a test follows below).

- AI is no substitute for human judgement: Data has no soul, no concept of right and wrong. The more AI is used to make decisions that affect humans (e.g., how best to allocate health care resources), the greater the risk that it will make decisions that are cold and inhuman. This will become increasingly relevant, as to date most AI isn't smart enough to be put in this situation.
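
To make the cost/benefit arithmetic concrete, here is a minimal sketch of the expected-value check for the ticket-routing example. Only the 92% accuracy figure comes from the comment; the benefit and cost numbers are hypothetical stand-ins.

```python
# Expected value of automating ticket routing, per ticket.
# accuracy is from the comment above; the other numbers are made up.
accuracy = 0.92
benefit_correct = 2.0   # e.g. minutes of triage saved per correctly routed ticket
cost_wrong = 30.0       # e.g. minutes lost when a ticket bounces between teams

expected_value = accuracy * benefit_correct - (1 - accuracy) * cost_wrong
print(f"expected value per ticket: {expected_value:+.2f} minutes")
# 0.92 * 2.0 - 0.08 * 30.0 = -0.56: a net loss even at 92% accuracy,
# which is why the escalate-to-a-human UX makes the difference.
```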

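The bias point also lends itself to a simple first test. Here is a minimal sketch of a demographic-parity check for a hiring model; the candidate data, group labels, and threshold are all made up for illustration.

```python
import random

random.seed(0)

# Hypothetical model scores for 1,000 candidates in two groups.
candidates = [{"group": random.choice("AB"), "score": random.random()}
              for _ in range(1000)]
threshold = 0.5  # the model predicts "will pass the interview" above this score

for group in ("A", "B"):
    members = [c for c in candidates if c["group"] == group]
    pass_rate = sum(c["score"] > threshold for c in members) / len(members)
    print(f"group {group}: predicted pass rate {pass_rate:.1%}")

# A large gap between the groups' predicted pass rates is a red flag
# that the model has learned, and will amplify, a bias in its training data.
```
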
Pulse User

Forgive the lack of formatting above; it all got stripped by Pulse. Mayank, you guys should consider adding some markdown capability.

Pulse User

Oh! Yes, you're right - looks better.

Pulse User

There's a typo in my response that I can't edit. It should read: "Most technology follows an S-Curve not a J-Curve - yet it is easy to mistake the two."

Pulse User

I would agree with Timothy Campos' assessment. The AI we have now comes down to a series of algorithms, and the ethics are tied to how we interpret and use the data. The biggest challenge, from my perspective, is using data in a way that is beneficial to the greatest number of people, even if that means it may negatively impact our ideas, sense of morals, or profit.

Pulse User

AI is a very powerful tool/process that must be used in conjunction with human intelligence to be effective in obtaining the desired results. The ethical component applies to the human who uses AI to achieve an objective.

Pulse User

Claude Shannon once said that whatever line is set for us, as soon as we achieve it, the rest of the world will say that's not really intelligence. You can't keep shifting the line. Will AI continue to get more and more intelligent? It certainly will, but it doesn't have to be a contest. It doesn't have to be a competition. The work I do focuses on what I refer to as symbiotic tech solutions: how to have humans and machines work integrally. We are the species that augments, from eyeglasses to iPads, from shoes to cell phones. We are born naked, alone, and afraid. It's only through technology that we are able to achieve what we do as humanity. And so this is nothing new for us. This is just the next logical step.

Pulse User

Another way to look at this is via the ways your AI solution can be attacked to produce incorrect results. We've all seen the demos of image classification algorithms that mistake turtles for guns, or the t-shirts that your Tesla thinks are stop signs. So if those types of "AI" solutions can be attacked, what hidden biases exist in your solution? I think the work being done on "explainable AI" is pretty important. If we're going to rely on systems built with these technologies, it's important that we (or at least the designers) understand what is actually occurring. Otherwise, we're just building systems that look like that old whiteboard drawing with the box that says "Magic Happens".
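
To make the attack concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one classic way adversarial inputs like those demos are crafted. The model and "image" here are toy stand-ins, not a real classifier; with a real network, a perturbation this small is invisible to a human yet can flip the prediction.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an image classifier: 28x28 grayscale inputs, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

image = torch.rand(1, 1, 28, 28)  # stand-in for a real image
label = torch.tensor([3])         # its true class
epsilon = 0.1                     # per-pixel perturbation budget

# Compute the loss gradient with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel in the direction that increases the loss.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The unsettling part, and the tie-in to explainable AI, is that nothing in the model's output warns you this happened; without tools that expose what the model is actually keying on, the box really is "Magic Happens".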