Could risk quantification tools help develop more security-minded engineers?

Anonymous Author
I could be an architect, an engineer or a code writer; at most tech companies my career trajectory is propelled only by whether the functionality the business wants exists. It could be Swiss cheese from a security or privacy perspective and I can still become a principal engineer, fellow or VP as long as it's patentable and the functionality is there. We have to build into those disciplines the fact that if somebody develops stuff with vulnerabilities in it, they need to be held back, performance managed and have their skill deficiencies remediated. Unless you're doing good enough reviews early enough, you might not find out about these deficiencies until later.
0 upvotes
Anonymous Author
We can use risk quantification for performance reviews that are truly repeatable with a shift-left approach. As you commit your code into a branch, before you cut a release I can get a real-time report that you committed three vulnerable libraries and introduced 40 common weakness enumerations (CWEs), which map to OWASP Top 10 risks. We can get these reports every time code is committed, and the manager can use that as input to a performance review. They can aggregate all of that and say to their teams, "If you want a true risk quantification, this is how bad we are." (A rough sketch of how such a per-commit report might be aggregated follows below.)
1 upvotes
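
To make that per-commit report concrete, here is a minimal sketch of the aggregation step. It assumes a scanner has already run in the pipeline and exported its findings to a file; the file name (findings.json), its field names ("author", "cwe", "vulnerable_library") and the small CWE-to-OWASP mapping are illustrative assumptions, not the output format of any specific tool.

# Hypothetical sketch: collapse one commit's scanner findings into the
# counts described above (vulnerable libraries, CWEs, OWASP Top 10 buckets),
# so a manager could track them per developer over time.
import json
from collections import Counter

# Small illustrative subset of a CWE -> OWASP Top 10 (2021) mapping.
CWE_TO_OWASP = {
    "CWE-79": "A03:2021 Injection",   # cross-site scripting
    "CWE-89": "A03:2021 Injection",   # SQL injection
    "CWE-287": "A07:2021 Identification and Authentication Failures",
    "CWE-502": "A08:2021 Software and Data Integrity Failures",
}

def summarize_commit(findings_path: str) -> dict:
    """Turn raw findings for a single commit into a short risk summary."""
    with open(findings_path) as f:
        # Assumed format: a list of {"author", "cwe", "vulnerable_library"} records.
        findings = json.load(f)

    vulnerable_libs = {r["vulnerable_library"] for r in findings if r.get("vulnerable_library")}
    cwe_counts = Counter(r["cwe"] for r in findings if r.get("cwe"))
    owasp_counts = Counter(
        CWE_TO_OWASP.get(cwe, "Unmapped") for cwe in cwe_counts.elements()
    )
    return {
        "vulnerable_libraries": len(vulnerable_libs),
        "cwes_introduced": sum(cwe_counts.values()),
        "owasp_top10_buckets": dict(owasp_counts),
    }

if __name__ == "__main__":
    # Run once per commit against whatever the pipeline's scanner exported,
    # then store the result keyed by commit author for later aggregation.
    print(json.dumps(summarize_commit("findings.json"), indent=2))

The design choice here is that the scanner itself stays interchangeable: any tool that can emit a per-commit list of CWEs and vulnerable dependencies could feed the same summary, which is what makes the resulting numbers repeatable enough to use in a review conversation.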