How should IT teams look to approach AI and ML? Where should we look to start in terms of platforms/applications to invest in first? How should we measure success? What are good examples here to learn from?


#ai,#machine learning,#platforms,#applications,#investments,#metrics,#ml,#artificial intelligence @IT

Prakash Kota, CIO

Autodesk IT strives to have our services “always on” and “always ready” with “zero vulnerabilities”. We think the best opportunities for machine learning in 2018 are to predict failures before they happen (“predictive failure”) and to identify security breaches in enterprise application user logins. As such, we began two complementary machine learning tracks in 2017 that we will expand upon and mature in 2018:

  1. Predictive Failure: Autodesk collects a vast amount of log data from devices, environments, monitoring systems, and applications. These logs are normally used in post-event research (finding out why something failed) or in an alerting mechanism, where an error message in a log triggers the creation of a ticket that engages the support team to take action. This track applies machine learning to predict failures before they happen based on patterns in the logs. In the first phase of this track, we used Kibana’s machine learning application to run single-metric and multi-metric unsupervised learning jobs to detect anomalies in the connections being made to the wireless controllers throughout Autodesk. We used Watcher (an Elasticsearch plugin) to send email notifications when there is an inconsistent number of Wi-Fi connections (specifically, more than expected in a single day) so that action can be taken, as a degradation of service is likely (see the first sketch after this list). In 2018, we’ll expand upon this concept to predict failures for systems and applications.
  2. Identify security breaches in enterprise application user logins: The second track uses machine learning to identify security breaches in user logins. Specifically, we use machine learning to classify whether a given user’s login is usual, based on the details provided in the login query. The first thing we’ve done to accomplish this is to identify possible patterns among users according to their department and their application usage, and then to group them together so that we can get insight into the ways in which different users are similar and dissimilar. The machine learning approach adopted to accomplish this was “expectation maximization”, an unsupervised learning method that groups users into clusters based on the amount of time a user spends in various enterprise applications. (The Mean Shift unsupervised learning algorithm from the scikit-learn Python module was leveraged to accomplish this; see the second sketch after this list.) The results show that the applications a user typically uses on a daily basis are indeed a viable metric for evaluating the fit of a user to a particular department at Autodesk. In other words, this information can be used to determine whether a login to an enterprise application is unusual for a given user and the user’s assigned cluster.
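
As a rough, standalone illustration of the single-metric idea in the first track (this is not the actual Kibana machine learning or Watcher configuration, and the window size and threshold are assumptions), a simple trailing z-score check over daily Wi-Fi connection counts could look like the following Python sketch:

    import numpy as np

    def flag_anomalous_days(daily_counts, window=14, threshold=3.0):
        # Hypothetical rule: flag a day whose connection count differs from
        # the trailing window's mean by more than `threshold` standard
        # deviations. Elastic's ML jobs use their own anomaly-scoring model;
        # this only mirrors the intent.
        counts = np.asarray(daily_counts, dtype=float)
        anomalies = []
        for i in range(window, len(counts)):
            history = counts[i - window:i]
            mu, sigma = history.mean(), history.std()
            if sigma > 0 and abs(counts[i] - mu) / sigma > threshold:
                anomalies.append(i)  # index of the suspicious day
        return anomalies

A day flagged by a rule like this is the point at which an alert (for example, the Watcher email notification described above) would engage the network team before users experience degraded Wi-Fi.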
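
For the second track, a minimal sketch of the clustering step, assuming each row of the matrix is a user and each column is the time spent in one enterprise application (the feature layout, example values, and distance threshold are illustrative assumptions, not Autodesk’s actual pipeline), could use Mean Shift from scikit-learn as follows:

    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    # usage_hours: rows = users, columns = hours per week spent in each
    # enterprise application (synthetic example data).
    usage_hours = np.array([
        [20.0,  1.0,  0.5],
        [19.0,  2.0,  0.0],
        [ 1.0, 15.0,  8.0],
        [ 0.5, 14.0,  9.0],
    ])

    # Estimate a kernel bandwidth from the data, then cluster the users.
    bandwidth = estimate_bandwidth(usage_hours, quantile=0.5)
    model = MeanShift(bandwidth=bandwidth)
    labels = model.fit_predict(usage_hours)  # labels[i] is user i's cluster

    def is_unusual(profile, user_label, max_distance=10.0):
        # Hypothetical check: compare a login's usage profile with the center
        # of the user's assigned cluster; a large distance suggests the login
        # does not match the user's normal behavior.
        center = model.cluster_centers_[user_label]
        return np.linalg.norm(np.asarray(profile) - center) > max_distance

The distance check at the end reflects the idea in the answer: once users are grouped by how they spend time in enterprise applications, a login whose usage profile falls far from the user’s cluster can be flagged as unusual.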


We're measuring success by asking the questions "Have we learned something that we didn’t know before?" and "Have these learnings had an impact on the availability and/or security of our enterprise application landscape?"


Arvind Radhakrishnen, VP of Product Management

First, it is imperative to understand what specific business use cases you have for machine learning and AI. To identify and evaluate them, it is best to work closely with business and operations to identify areas that require either human judgment or cognitive capabilities. An alternate scenario could be understanding how to improve the customer experience. The best way to approach this is to build customer journey maps and identify pain points across the channels as customers interact with your organization. This gives a holistic view of the set of opportunities you have. Then decide how many of these can be solved using a machine learning or artificial intelligence solution. Platform choice comes next: choose the platform that best fits the business use cases identified, which is actually relatively straightforward if you intend to leverage vendor products. Go to RFP, ask the vendors to solve for the business case, determine the technology adaptability and cost/economics, and make the decision appropriately.