Should Artificial Intelligence be tested by other Artificial Intelligence?

Anonymous Author
You need human beings asking the questions, just as you would when testing for implicit/explicit bias or anything else. Until you can capture comprehensive data on brain synapses and put it against every possible question, computers aren't going to get there. With quantum computers and maybe 50 years of that analysis it might be possible, but for the next 50-100 years, unless there's a major investment in quantum computing or something like it, and that synaptic analysis is done by those computers at massive scale, you're just not going to be able to reproduce the diverse questions that people will ask.
Anonymous Author
Currently the Twitter cropping algorithm handles Black people in photographs badly: it crops them out or otherwise fails to recognize them. Is the problem the code or the data? The algorithm is being trained on unrepresentative data, mostly white faces, and changes need to be made. I imagine the Twitter QA people, or the data used in testing, were mostly white, so it all looked fine during testing. The problem is that when you put the algorithm into real use, it's not fine.

There are multiple challenges here that come down to specifying use cases. We have to be more agile in our response: less anxious to declare that our technology meets all its required use or test cases, and more keen to learn from experience and change quickly.

Data and code are distinct concerns, and the big change for us as IT people is understanding that implicit bias can sit in the training dataset independently of the algorithm: you can have a great algorithm, but if your data is bad, you're in trouble. Walking this boundary, composing training data ethically and training appropriately, is really important.
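The "it looked fine during testing" failure mode above can be made concrete. Here is a minimal synthetic sketch, not Twitter's actual system or data, showing how an aggregate QA metric hides a subgroup failure when the test set is skewed; the toy `detector`, the group labels, and all numbers are made up for illustration.

```python
# Hypothetical sketch: an aggregate accuracy number can hide a subgroup failure.
# All data is synthetic; the "detector" is a stand-in, not any real model.

def detector(sample):
    # Toy model that always succeeds on group "A" but fails half the time
    # on group "B" (a stand-in for a cropper trained mostly on one group).
    return sample["group"] == "A" or sample["id"] % 2 == 0

# Skewed QA set, like a test pool drawn mostly from one demographic:
# 90 samples from group A, only 10 from group B.
test_set = [{"group": "A", "id": i} for i in range(90)] + \
           [{"group": "B", "id": i} for i in range(10)]

# Aggregate metric: looks healthy because group B is barely represented.
overall = sum(detector(s) for s in test_set) / len(test_set)

# Disaggregated metric: compute accuracy per group instead.
by_group = {}
for g in ("A", "B"):
    subset = [s for s in test_set if s["group"] == g]
    by_group[g] = sum(detector(s) for s in subset) / len(subset)

print(f"overall accuracy: {overall:.0%}")   # 95% - "passes QA"
print(f"per-group accuracy: {by_group}")    # group B is at 50%
```

The design point is simply that the evaluation has to be sliced along the dimensions where bias can hide; a single overall score over an unrepresentative test set tells you nothing about how the model behaves on the under-sampled group.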