How can CISOs/CIOs codify a set of ethical principles around AI?

We are building a sense of a normative “should” around artificial intelligence from the regulatory imperative. So the framing discussion around “should” is attestation to various regulatory frameworks (CCPA and GDPR foremost amongst them, as the golden or rising bars for responsible handling of data). One of the challenges in evolving that conversation beyond strong regulatory frameworks is that when you're trying to build a critical-thinking approach to the new technologies and skills that are desperately needed, you have to look beyond that “should”. There's a business-enabling and board-facing win in the regulatory narrative, because it will always have a place in highly regulated sectors. But when evolving the conversation around AI, you have to treat the regulatory piece as only one part of it.

The second point I would make is this: we've all been in those meetings where we're reviewing the latest EDR platform or SOAR solution and there is a claim of AI/ML. It reminds me of the push towards black-box detective analytics five or six years ago. These pre-packaged solutions promise, “Don't worry about what's happening down here; there's protection.” To the credit of the information security community, that's no longer good enough today. I think there's a need amongst us to drive towards validation of those claims. And then it comes back to the skilling piece: do we have the right people to ask those questions, and are we preparing them for that? That's a difficult thing.
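To make “validation of those claims” concrete, here is a minimal Python sketch of what an independent test could look like, assuming a hypothetical `submit_sample` hook into the product under evaluation and a locally curated, labeled corpus; none of these names come from any real vendor API. The idea is simply to replay known-good and known-bad samples and compute the detection metrics yourself rather than taking the AI/ML claim at face value.

```python
# Hypothetical sketch: independently validating a vendor's AI/ML detection
# claims by replaying a labeled corpus and computing the metrics ourselves.
from dataclasses import dataclass


@dataclass
class Sample:
    payload: bytes
    is_malicious: bool  # ground-truth label from our own curated corpus


def submit_sample(payload: bytes) -> bool:
    """Placeholder hook for the product under evaluation; True means flagged."""
    raise NotImplementedError("wire this to the vendor's detection interface")


def validate_claims(corpus: list[Sample]) -> dict[str, float]:
    tp = fp = fn = tn = 0
    for sample in corpus:
        flagged = submit_sample(sample.payload)
        if flagged and sample.is_malicious:
            tp += 1
        elif flagged and not sample.is_malicious:
            fp += 1
        elif not flagged and sample.is_malicious:
            fn += 1
        else:
            tn += 1
    return {
        # How much of the known-bad corpus the product actually catches.
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        # How often a flag corresponds to something genuinely malicious.
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        # How noisy the product is on known-good samples.
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
    }
```

The exact metrics matter less than turning “there's protection” into a falsifiable claim your own people can test, which is also where the skilling question bites.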

Anonymous Author
From a cybersecurity and incident response capability perspective, I've always been concerned with the event itself that led to whatever the investigation is. Almost everything we do in incident response invariably starts with an event of some type, not with a person. At the start, who cares about the person? There's an event that started this whole thing, and eventually you're going to get down to: it was this computer, and it was this person behind the keyboard who took whatever the action was. So it's always event-before-person for me. There is still that human component, where even analysts will go, “Geez, that says it was Jeff, but Jeff wouldn't normally do that kind of thing, would he?” Or, “Oh, Jeff's my friend, and we'll just sweep this under the rug.” So even though an AI component might come up, once it gets to the humans, the picture changes a lot in terms of what the response action is going to be.
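As a rough illustration of that event-before-person ordering, here is a minimal Python sketch, with all field and action names invented for the example rather than drawn from any real IR platform: severity is decided from the event alone, and attribution to a person is a separate, later enrichment step.

```python
# Hypothetical sketch: event-first triage. The record is keyed on the event;
# attribution to a person is a late, optional enrichment, not the starting point.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class IncidentEvent:
    event_id: str
    observed_at: datetime
    host: str
    action: str
    attributed_user: str | None = None  # filled in only after investigation


def triage(event: IncidentEvent) -> str:
    """Severity is decided from what happened, not from who appears to have done it."""
    high_risk_actions = {"credential_dump", "mass_file_encryption", "data_exfiltration"}
    return "critical" if event.action in high_risk_actions else "review"


def attribute(event: IncidentEvent, user: str) -> IncidentEvent:
    """Attribution happens last, as its own step, so analysts can weigh it separately."""
    event.attributed_user = user
    return event
```

Keeping attribution out of the triage path also bounds the “Jeff wouldn't do that” bias, because the severity call is already made before a name enters the picture.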
Anonymous Author
The assumption is that we, as large companies, are all looking at what our principles for AI are. Where I am, we're just looking at ours now, and it's been an eight-month endeavor to figure out what our principles are, what we want to advocate for, and what we want to do, and we're still working on it. From what I understand, there aren't a lot of companies out there right now having the same conversation.
Anonymous Author
I am not an AI-focused practitioner by any means, but with my limited experience, something that really occurs to me about AI is that, just as with other complex, advanced, inherently challenging information security disciplines, there's a real conversation to have around inequity. At the very core technical level (is your data structured? Is it unstructured? Do you even know where your data is?), there's a wide array of maturity in that space. I am not shirking the collective responsibility to think about privacy and aspire to its codification, but I also look at some of these smaller entities and think that while there's work that can and should be done, there's also a fundamental problem of skilling and environmental maturity that has to set the floor before ML and AI can be built atop it.