Machine Learning

Innovation: AI Investment and Adoption

AI/ML investments are well underway at leading companies and show no signs of slowing. But are these massive investments generating the business results they promise? Your IT peers tell all.

Choose your preferred search engine and mention the reason below:

Top Answer : Elasticsearch: Very good log analytics and visualization with Kibana
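
The log analytics that answer praises boils down to queries and aggregations over indexed log documents. A minimal sketch with the official Python client (assuming the 8.x API); the index pattern and field names ("logs-*", "level", "service") are illustrative assumptions, not a real schema:

```python
# Hypothetical example: count error-level log lines per service over the
# last hour. Index pattern and field names are assumptions for illustration.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="logs-*",
    size=0,  # aggregations only, skip the raw hits
    query={"bool": {"filter": [
        {"term": {"level": "error"}},
        {"range": {"@timestamp": {"gte": "now-1h"}}},
    ]}},
    aggs={"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
)

for bucket in resp["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```

In Kibana, the same aggregation would typically live as a saved visualization rather than a script.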

Pulse Flash Read

Edge computing: a vendor’s dream in a catastrophizing world?

Edge Computing, as a category, sounds like a bad marketing attempt to create intrigue around yet another 'paradigm-shifting' technology that's 'the future of the enterprise'. What the name actually refers to is its functionality: pushing the IT perimeter right up to the edge of where the data is being collected, in the form of smart sensors or Internet-of-Things (IoT) devices.

Edge Computing (EC) isn't just about gathering data, though, because gathering too much data eventually becomes a problem of its own. Smart sensors mostly collect unusable 'noise'. Think about all the images a smart camera picks up throughout each day on a busy street. If the cloud has to process hours of nothing but the occasional pigeon wandering into shot, that's a ton of wasted server space. What EC does is place computing power inside those sensors. That computing is done in isolation, locally, cut off from the rest of the network, after which only the clean, usable data is sent on to the cloud or data center.

While EC solves some big problems in principle, there are a lot of discrete steps and factors in the whole process. Imagine the data as a game of hot potato. Each step of that game requires entrenched security: it's bad news if someone can simply hack into the data being processed in the sensor (and there are many anecdotal reports of this happening), or corrupt it before it's passed on to the data center. That means containerization. There's also latency and power consumption to consider. What's the point of collecting data out in the world if it takes too long to make it to the cloud and back (a potentially fatal problem in industrial safety settings), or if devices die out in remote locations? That means utilizing up-to-date 5G networks, LoRa (Long Range, low-energy radio), BLE (Bluetooth Low Energy) and so on. And all the computing happening at the source needs machine learning expertise thrown at it to figure out what's useful.

This hot-potato situation creates complexity, as myriad unproven vendors step in to offer 'expertise' and single-point solutions to intrigued but overwhelmed IT teams.
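To make the filter-at-the-edge idea concrete, here is a minimal sketch: a device loop that scores each frame locally and forwards only the interesting ones. The frames and the change-detection heuristic are simulated stand-ins; a real deployment would use device-specific capture and an on-device model.

```python
# Minimal sketch: drop the "pigeon noise" on the device itself and forward
# only frames where something actually changed. Frames are simulated lists
# of pixel values; capture and uplink would be device/vendor specific.
import random

MOTION_THRESHOLD = 0.15  # fraction of pixels changed; tune per deployment

def motion_score(prev, curr):
    """Cheap on-device check: fraction of pixels that changed noticeably."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > 25)
    return changed / len(curr)

def next_frame(prev):
    """Simulated capture: mostly sensor noise, with a rare real event."""
    if random.random() < 0.05:  # ~5% of frames contain something worth seeing
        return [random.randint(0, 255) for _ in prev]
    return [min(255, max(0, p + random.randint(-5, 5))) for p in prev]

prev = [128] * 1024
sent = 0
for _ in range(200):
    curr = next_frame(prev)
    if motion_score(prev, curr) >= MOTION_THRESHOLD:
        sent += 1  # in production, only these frames would leave the device
    prev = curr

print(f"forwarded {sent} of 200 frames; the rest never left the edge")
```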
Gartner predicts a 'volatile' situation over the next few years as quick adoption is met with insufficient solutions, and accordingly advises organizations to bring an agile mindset to any EC project. But we also live in a world where climate-driven catastrophes and social unrest are becoming the norm. That could spell disaster for tech companies, for whom the internet is the lungs breathing life into the organization: on current infrastructure, large-scale outages will become a (bigger) problem. EC, however, enables a situation where devices don't depend on a constant connection. Data can be stored on-device until a network can be re-established, preserving it during downtime. Edge devices can also be architected into mesh networks, so that even if one node goes down, the rest of the network picks up the slack and finds alternative pathways for upload. If the cloud forms the organization's brain, edge devices can act as miniature, failsafe memory banks.

Given that these sensors are physical objects, though, they won't ever be totally safe from harm. Just look at how conspiracy theorists have torched 5G towers (or whatever someone on Facebook thought was a 5G tower, anyway). So physical device safety is another aspect to consider.

As part of a movement made up of several different innovative, overlapping fields such as IoT networks, 5G, virtual machines, cloud computing and Kubernetes (to name a few), wide-scale adoption might take a while to come to fruition. Perhaps over time more generalized, established, expert vendors will take control of the field. IBM is already well positioned for that. ClearBlade boldly offers a scalable, managed solution for a 'common software stack across the board'. Red Hat offers a 'Virtualization Manager' for centralized oversight, with a graphical user interface and an open-source library to refer to, but that still leaves IT a lot of work to do.

For now, EC seems to be living up to its name more by existing on the edges of conversations than in IT infrastructure. But as data collection turns increasingly into data deluge, Edge Computing might form the dam that saves the IT ecosystem from disaster.
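
As a footnote to the resilience point above, the store-and-forward behavior is easy to picture in code. A minimal sketch, with the flaky uplink simulated and the send call left as a stand-in (a real device would also persist the buffer to flash so it survives a reboot):

```python
# Minimal sketch: readings queue on-device while the uplink is down and
# drain oldest-first once it returns. The network and send call are stubs.
import random
from collections import deque

buffer = deque(maxlen=10_000)  # bounded, so the device can't exhaust memory

def uplink_available():
    return random.random() > 0.3  # simulate a network down ~30% of the time

def upload(reading):
    pass  # stand-in for the real, vendor-specific send call

for t in range(1000):
    buffer.append({"t": t, "value": random.gauss(0, 1)})  # sample regardless
    if uplink_available():
        while buffer:
            upload(buffer.popleft())  # drain once connectivity returns

print(f"{len(buffer)} readings still buffered, waiting for the network")
```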

Are there any vendors you’re excited about in the Edge Computing space? Or is Edge Computing not there yet for you?

Top Answer : AWS

Are there any ethical concerns you have about implementing AI/ML?

Top Answer : I'm in healthcare, and what we're trying to do is develop drugs. Right now it's like a 10-year cycle, so we're trying to find a way to speed that up. There's a lot of ethics involved in AI/ML around what you can access when you're swallowing sensors. What kind of data can you pull? Is it going to be PHI? Is it going to be HIPAA data? Things like that. What kind of data do you want to release to your doctor, if you're all sensorized? Because pretty soon we're going to be poking sensors all over our bodies that do a great number of things. So that's where we have to be careful.

Machine Learning and Inference model monitoring

As machine learning becomes more important to businesses, how are they thinking about Inference model monitoring? Is it a priority for them?
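
For anyone weighing where to start: a common, lightweight form of inference monitoring is checking the model's live output distribution for drift against a training-time baseline. A minimal sketch using the population stability index (PSI), with synthetic score distributions standing in for real ones:

```python
# Minimal drift check: compare live prediction scores against a baseline
# using the population stability index. Both samples are synthetic here.
import numpy as np

def psi(baseline, live, bins=10):
    """Population stability index between two score samples."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range scores
    expected = np.histogram(baseline, edges)[0] / len(baseline)
    actual = np.histogram(live, edges)[0] / len(live)
    expected = np.clip(expected, 1e-6, None)     # avoid log(0) on empty bins
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
baseline = rng.beta(2.0, 5.0, 50_000)  # scores seen at training time
live = rng.beta(2.6, 5.0, 5_000)       # what production looks like today

print(f"PSI = {psi(baseline, live):.3f}")  # rule of thumb: > 0.2 often flags drift
```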

When evaluating AI/ML offerings, how do you decide which is best for you and your organization?

Top Answer : Probably one of the more difficult things, I find, is trying to keep up with all the tech but also be pragmatic in the process. Because it's easy to go out and want to buy all the new cool stuff. But is it really realistic to try to implement, deploy, and manage all that? That's the microservices conundrum. Someone just posted something recently... "I thought that moving to microservices was gonna make my job easier for everything. And now every time we have an outage, it's like a murder mystery." So it's interesting to see the give and take that comes with that. A lot of offerings will have features that are more or less localized to what you're looking for.

State of Data Operations

This report is a deep dive into data operations opportunities and challenges, surveying over 150 IT executives.

What are your top challenges implementing AI/ML?

Top Answer : Using basic statistical modeling and unsupervised learning doesn't help, because cyber security doesn't have a labeled-data concept. Go to Netflix or Amazon: they look at users' behavior, and they're able to tune the algorithms and increase accuracy. But in cyber security, there's no labeled data. The traditional approaches of figuring out signal versus noise did not work, so now we are looking at newer ways of doing things, where you're teaching the machines to think like a self-driving car. Meaning, not just through unsupervised learning, but by looking at how a really skilled human would go figure things out. So that's where this is headed right now.
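
To illustrate the "no labels" starting point the answer describes, here is a minimal unsupervised sketch using scikit-learn's IsolationForest to score security events by how anomalous they look. The features (bytes sent, hour of day, failed logins) are illustrative assumptions, not a real detection schema:

```python
# Minimal unsupervised anomaly scoring over simulated security events.
# No labels needed: the model learns what "normal" looks like and flags
# the rest. All data below is synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" events: [bytes_sent, hour_of_day, failed_logins]
normal = np.column_stack([
    rng.lognormal(8, 1, 5000),   # typical transfer sizes
    rng.integers(8, 18, 5000),   # mostly business hours
    rng.poisson(0.2, 5000),      # the occasional failed login
])
# Two injected oddities: huge transfers in the small hours, many failures
odd = np.array([[5e6, 3, 30], [8e6, 2, 45]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.decision_function(np.vstack([normal[:5], odd]))
print(scores)  # lower = more anomalous; the injected rows stand out
```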

What use cases do AI/ML bring the most value to?

Top Answer : Cybersecurity. In the 6 years I spent at Visa, I witnessed about $1.2 billion spent on cyber security. Security is part of the core business model, and when you spend that kind of money, the tools you have churn out quite a bit of machine data. Obviously people are proud of retaining those 6 billion events per day, but truly think about it: you have six humans on eight-hour shifts, looking through close to 2-3 billion events in those eight hours, trying to make sense of it all and catch a breach before it happens. If there is a breach, game over, right? Integrity is really important. So we started with basic statistical modeling and things like that, given the volume, variety and velocity, but it was not scalable. What I've observed is that when you use traditional rule-based systems or statistical modeling, the false positives far outnumber the actual signal. So practitioners either tune out those rules or ignore the alerts that come up. Those 5-6 billion events include at least 300 to 400 million alerts in that one-day period, and obviously humans cannot look at all of that and make sense of it. That's where cyber security becomes a really good use case for AI.
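
The arithmetic behind that answer is worth spelling out: even counting only alerts, not raw events, human triage cannot keep up. The sketch below uses the answer's own figures plus two labeled assumptions (six analysts per shift, 30 seconds of triage per alert):

```python
# Back-of-the-envelope math: the answer's own figures plus two assumptions.
alerts_per_day = 350e6         # midpoint of the 300-400M alerts cited
analysts_per_shift = 6         # "six humans" per eight-hour shift
shifts_per_day = 3             # assumed: 3 x 8-hour shifts for 24h coverage
seconds_per_alert = 30         # assumed: one fast triage pass per alert

reviewable = analysts_per_shift * shifts_per_day * 8 * 3600 / seconds_per_alert
coverage = reviewable / alerts_per_day
print(f"{reviewable:,.0f} alerts reviewable/day vs {alerts_per_day:,.0f} generated")
print(f"coverage: {coverage:.5%}")  # ~0.005%: why triage has to be automated
```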
