Artificial Intelligence
What do you see as the biggest obstacle to undertaking AI projects in your company?

Top Answer : Quality of data over sufficient quantity.

Are chatbots AI (Artificial Intelligence) or ML (Machine Learning)?

Top Answer : I don't like the word chatbot per se. Whether it's machine learning or AI, there are steps you need to go through to train whichever intelligent assistant or AI model you're working on. You need a number of cases and you have to properly train your model with a human in the middle, because the only way the AI can learn is if you actually identify what is not correct and help the model improve its accuracy. One example a company I seed-invested in was working on required the AI to differentiate cucumbers from tomatoes: if the model looks at a cucumber and labels it as a tomato, the human in the middle corrects that mistake.
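
As a rough illustration of the human-in-the-middle loop described above, here is a minimal Python sketch. The model interface and the review step are hypothetical stand-ins for whatever labeling tool or intelligent assistant is being trained; the sketch only shows the shape of the correction cycle, not any particular product.

# Minimal human-in-the-loop correction sketch (illustrative only).
# "model" and "review" are hypothetical stand-ins: the model proposes a label
# (e.g. "tomato"), a human confirms or corrects it, and the corrected pairs
# feed the next round of training.
def human_in_the_loop(model, samples, review):
    corrected = []
    for sample in samples:
        predicted = model.predict(sample)       # model's guess, e.g. "tomato"
        confirmed = review(sample, predicted)   # human keeps or fixes the label
        if confirmed != predicted:
            print(f"Correcting {predicted!r} -> {confirmed!r}")
        corrected.append((sample, confirmed))
    model.fit(corrected)                        # retrain on human-verified labels
    return corrected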

Will AI negatively impact the workforce at large?

Top Answer : It's a big debate that starts to get into the philosophical aspects—AI is automation, so will it take away jobs? Manufacturing is a perfect example: Everything in that space is automated now, which took away jobs. But that's okay because you can reskill people.

In light of the story around AI ethics researcher Timnit Gebru, do you believe Google is committed to transparent and ethical AI practices?

Top Answer : I think Google by nature always has some nefarious intent with anything they do. It is all centered around making money, and using data as a means to do so, no matter how that data is acquired.

Innovation: AI Investment and Adoption

AI/ML investments are well underway at leading companies and show no signs of slowing. But are these massive investments generating the business results they promise? Your IT peers tell all.

AI startups

This report was created for IT executives interested in surveying which startups their peers find interesting and have worked with.

Is open source a cybersecurity minefield for AI?

Top Answer : The open source tools are great but you have to ask, how have they deployed it? Where are they getting the data and where are they hosting it? A few years back I did some work for a company in the AI space which was basically an amalgamation of open source tools hosted in your own dedicated Amazon VPC so you could run whatever you wanted. In that case it was pretty straightforward, but in many instances of AI—especially with AI providers—that data is probably coming from your networks or systems and it’s going somewhere else, probably to a cloud service that will have the compute power. If they are actually doing ML, you need some good compute power and the cloud services are great for that. But chances are it's not one cloud: they've got some compute power over here doing this analysis, then they're sending the data over to a Tableau instance, which is sitting in another area. Having the tools is great because it gives researchers the ability to do a lot of things and to improve the tools. But it's a question of where they’re deployed: How can you get your arms around where the data is residing as well as the data about the data? Machine learning is taking data, doing all this analysis, and creating more data, so where's that data sitting?
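
One way to start getting your arms around "where is the data, and the data about the data" is to attach provenance metadata to every derived dataset. The Python sketch below is only an assumption about what such a record might look like, using the Amazon VPC and Tableau examples from above as placeholder locations; it is not a description of any particular vendor's tooling.

# Hypothetical provenance records: which source a dataset came from, what
# produced it, and where the result now resides. Names and locations are
# placeholders taken from the discussion above.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    name: str
    location: str                          # e.g. "dedicated Amazon VPC"
    derived_from: list = field(default_factory=list)
    produced_by: str = ""
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

raw = DatasetRecord("network-logs", "on-premises systems")
features = DatasetRecord("ml-features", "dedicated Amazon VPC",
                         derived_from=[raw.name], produced_by="open source ML pipeline")
report = DatasetRecord("dashboard-extract", "Tableau instance",
                       derived_from=[features.name], produced_by="analysis job")

# Walking the chain answers "where is that data sitting?" for each derived artifact.
for record in (raw, features, report):
    print(record.name, "->", record.location, "| derived from:", record.derived_from)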

Do you believe it when cyber security solutions claim they’re using AI?

Top Answer : There are a number of cyber security vendors that claim they have AI embedded in their tools—CrowdStrike comes to mind, among others. They have some automation but it’s not nearly what we would define as AI.

Do you think government regulation will stifle the evolution of AI technology?

Top Answer : It's fascinating when you look at how the AI space is evolving; there are a lot of positives. But the government is really talking about regulation on the ethical side, which scares the heck out of me. You don't want to tamp down the innovation, but it will take us as IT leaders to put some guardrails around it. It's that ethical side of AI as an industry that we have to watch out for. Explainable AI is a newly emerging field that's pushing people to rethink how they'll use the data at every step before they actually get the results. It's a new concept started at MIT that Microsoft and Google have been investing in heavily. The Defense Advanced Research Projects Agency (DARPA) even has an open-source challenge on explainable AI. Their goal is to make sure that any federal system that uses AI or ML understands exactly what’s involved: What is my input? What happens to my intermediate results? What happens to the bias? How is the system tracking with the data points and what is it putting out?
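
For readers who haven't seen explainable AI in practice, one common starting point is to measure how much each input feature drives the model's output. The sketch below uses permutation importance from scikit-learn on a toy dataset; it is just an illustration of that general idea, not the MIT work or the DARPA challenge mentioned above.

# Illustrative explainability sketch: permutation importance measures how much
# the model's score drops when a single input feature is shuffled, hinting at
# which inputs the model actually relies on. Toy data and model only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {score:.3f}")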

Should technology leaders trust AI, presuming the technology is fully realized?

Top Answer : What will be interesting is that we won't trust these things until we have the data to trust them. Suppose Tesla had the data to say that 87% of the time, when a driver was presented with the choice between hitting a pedestrian or running into a wall, they chose to hit the pedestrian. Would you feel better about that or not? Because at some point, Tesla is going to have that data, because this circumstance will happen enough times. And it's not just Tesla. Across the industry, you'll have the data because you'll know how people have reacted. You'll understand what human behavior has been, but I don't know if that'll make me feel better or not. I think that's going to be interesting. I definitely look forward to the day that we can do that.

Does machine learning (ML) use artificial intelligence (AI) or does AI use ML?

Top Answer : I see ML as something that's actually looking at my dataset and my environment, understanding what's going on and then surfacing that data in a way that is helpful—maybe in a way I didn't know I needed that data. The AI is the conversational part that is acting on my behalf, executing whatever I want it to execute. That's where I make that separation. I'm much more excited about the ML data side of things than I am about AI at this point.

Do you have any major concerns about the current state of AI?

Top Answer : When will you trust AI to actually do things? I put a ton of smart devices in my house and I was going to put a smart lock in, but then someone told me, "Anyone could be outside your house and yell, ‘Alexa unlock the front door,’ and it's just going to open." I was like, "Well, hold on a minute then." Mess with my lights, maybe, but don't unlock my door from the outside. I think I'm looking forward to getting there, but definitely, there has to be a lot of trust before we start doing that.
