Should the ethical issues arising from AI/ML be owned by the CISO?

With machine learning and artificial intelligence, I had to write codes of conduct for data scientists, figure out where we were getting the training data, and determine whether it was ethically acquired or just pulled off the internet. At Cylance, we were developing a persona product that used mouse movements and keystrokes to figure out whether the right person was at the keyboard. At Cymatic, we do that in the browser. But how do you do that without creating a ton of other privacy implications or false inferences? You have all these slippery slopes with AI and security, particularly when it's used for identification, where a whole lot of other inferences could be made.

One of the things I learned, and it has frustrated me with many peers over the past several years, is that a lot of my security peers said that's not their responsibility. It's not their responsibility if there's a bias built into it, even indirectly. They'd say, "Yeah, but it's not a cybersecurity attack." And I say: build it into your SDLC, build it into your privacy by design; you own it. I look at it from a regular risk perspective, but I also look at the ethical and moral implications, because I want to create the principles and compass so that we don't go down the slippery slope without realizing it and then say, "Oh shit" later.

Anonymous Author
I think it's a joint effort among the CISO, Engineering, Architecture, and Operations to keep AI/ML in check when it comes to security. Each area plays a role in ensuring the security of machine learning and artificial intelligence systems.
1 upvote
Anonymous Author
I think as practitioners in the profession, there are a couple of things we bring to the mix as potential roadblocks. One is the collective professionalization of CSOs across the board, driven by the skills gap and the jobs gap. We've got a lot of people coming in chasing dollars who are not fully, holistically skilled and who don't understand that there's a hard and a soft portion to doing this sort of work. Many of them either grew up in the soft portion and don't have the hard portion, or grew up in the hard portion and don't care about the soft portion, which makes it difficult for them to truly be advisors and partners. That's one challenge we're seeing collectively as we look at ethical implications, and it's exacerbated by the second roadblock.

In less mature, less senior, and smaller organizations, we are still looked at as a business roadblock rather than a business enabler. So when you start asking, "Yes, I know we can, but should we, and what are the principles that will guide us?" we run into the same ready-fire-aim business focus that went into outsourcing, offshoring, wireless, cloud, and everything else over the past 30 years. Those of us who've been around the block are only just getting to that level of operation, maturity, and understanding in these harder organizations. So as we look at these ethical issues and inject ourselves into them, so that we can properly secure them in our role to protect the company and, more importantly, provide shareholder value, that's the level of up-skilling and focus that I would submit needs to happen in the up-and-coming crop of CSOs.
0 upvotes