In the past, black-box third-party vendors have left security teams in the dark. How can security personnel ensure that vendors are honest and that the AI/ML they are selling is legitimate and ethical?

Anonymous Author
Be aware of it and encourage a healthy dialogue around what's put on the table. I think the tangible outcome comes from communicating frankly to the vendors we interface with that we expect more and would like transparency. There's an evolutionary angle to it, and I truly believe there's tangible good in doing that.
Anonymous Author
We all have third-party risk assessments. We all buy security tools. One way to maybe tease out the fake from the real is to ask vendors about their AI principles, ask them about a code of conduct for their data scientists… and match that against their third-party risk responses. If they go "huh?", then you know they're either on a lot of ethical slippery slopes with what they're doing, or they're not actually doing AI. Because if they don't have these things for their data scientists and can't show you their principles, they're not doing it, or they're doing it poorly.