Leading and Getting Started with AI
By Fredrik Backner, Telia Company
• Securing and managing expectations and sponsorship.
• Establishing a Value Realization Office to ensure adoption and “doing the right things”.
• Getting your data in shape. Lots of it.
• Leveraging open source frameworks and cloud services for a quick-start on AI.
• Securing the needed tech roles and team for developing deep learning.
If you already lead an analytics organization, you have probably secured some of the areas mentioned here and have a good starting point, although there are additional important factors to consider when expanding into deep learning and AI. At any rate, starting is more important than trying to get everything right from the beginning.
A Short Primer on Why Deep Learning Matters
AI has been around for more than 50 years, going in and out of “AI winters” before becoming mainstream and reaching hype status just a few years ago. This resurgence has largely been driven by the exponential growth over the past decade in affordable, high-performance computing power, the explosion of available data from devices, sensors and applications, and an influx of open-source machine learning frameworks.
Development of a deep learning algorithm is fundamentally different from traditional analytics modelling in that the underlying deep neural network autonomously and continually configures and optimizes itself as it is exposed to new and more data, adjusting depending on whether its output is correct. This is commonly done under human supervision, but increasingly also through reinforcement and unsupervised learning.
An important characteristic of deep learning is that its performance and precision typically continue to improve as it is trained with more and more data, and it identifies the relevant features in the data by itself (automating so-called feature engineering). This is not the case with traditional machine learning and predictive analytics models, where performance typically flattens out at a certain point even as more data is fed in, and where the algorithm needs to be configured, optimized and developed by a data scientist, requiring significant time.
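The supervised training loop described above can be sketched in a few lines of plain Python. This is a deliberately tiny, illustrative single neuron (the data, learning rate and epoch count are made-up assumptions), not real deep learning code, but it shows the core idea: the model adjusts itself whenever its output is wrong.

```python
def train_neuron(data, epochs=200, lr=0.1):
    """Fit the weight and bias of a single linear neuron by gradient
    descent on squared error -- a toy stand-in for how a deep network
    configures itself as it is exposed to labelled data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = w * x + b       # forward pass: the model's output
            error = pred - target  # how wrong was the output?
            w -= lr * error * x    # adjust parameters in the
            b -= lr * error        # direction that reduces the error
    return w, b

# Toy labelled data for the rule y = 2x + 1 (made up for illustration)
data = [(x / 10, 2 * (x / 10) + 1) for x in range(10)]
w, b = train_neuron(data)
print(w, b)  # learns approximately w=2, b=1
```

With more (and more varied) training pairs, the fitted parameters track the underlying rule more closely, which is the same dynamic that makes data volume so important for deep networks.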
Deep Learning Is Not a Silver Bullet
While forward-leaning companies already have a deliberate focus on machine/deep learning and AI, they fully understand that it is not a silver bullet: “traditional” machine learning and analytics approaches and algorithms sometimes deliver the required objectives. It is therefore important to understand when to use deep learning and when to stick to already proven and effective algorithms in the analytics toolbox.
A notable challenge with deep learning algorithms, compared to most “traditional” machine learning models, is that the inner workings of deep learning are difficult or impossible for humans to interpret, effectively making it a black box as to why certain decisions and classifications are made, and whether the algorithm is biased. This is one factor underpinning the emerging and relevant concerns about ethics and control surrounding AI.
Secure the Right Expectations, Sponsorship and Adoption
From my own experience, the “straightforward” part (relatively speaking) is building the technology and capabilities, collecting the data, and developing analytics applications and use cases. The principal challenge is getting it all operationalized, integrated and actually used and adopted within the different lines of business. Business managers are often intrigued and interested in analytics, but struggle to prioritize time away from short-term targets for the change management, training and adoption needed to embed analytics into business processes.
A critical success factor, alongside securing the right skills, teams and ways of working mentioned below, is therefore to secure the highest possible (CEO) sponsorship and buy-in, ensuring that clear incentives, strategy and targets are in place for adopting analytics, machine learning and AI, and that analytics is made a strategic focus area.
Once sponsorship is secured, it is important to set the right expectations and, above all, to start delivering quickly to create pull and interest from the business. Start small with understandable use cases whose value impact can easily be consumed, measured and understood by your business. Choose wisely, and stay laser-focused.
Establish A ‘Value Realization Office’. Do The Right Things.
It is highly recommended to establish a “Value Realization Office”, responsible for managing and prioritizing the backlog of use cases, quantifying and qualifying use cases and bets across the company, but also focusing on stakeholder management, change management, adoption, training and evangelizing analytics. Such a unit should sit very close to the analytics, data science and data engineering teams, staffed with profiles such as business analysts, business architects and “management consultants” who are proficient in talking to both business people and engineers.
Setting up cross-functional “tribes” with employees and stakeholders from many parts of the business and organization is an effective way to increase collaboration, buy-in and competence, and to align targets that otherwise risk diverging if scattered throughout an organization.
A value realization office can also coordinate, inventory and share knowledge across the different AI initiatives that often take place in different business units of a larger company, and optimize the use of their funding and resources. The unit is also a natural home for privacy competence and for liaising with privacy officers to ensure GDPR compliance for the data and use cases being developed.
Get Your Data in Place
Training neural networks requires vast amounts of data, which is why successful enterprises start by collecting and organizing their data in a data lake or a similar modern, agile data management and preparation layer. This is not a quick fix, which is why it is highly recommended to focus on the data (and its quality) from day one. With cloud-based data lakes and services having become mature and comprehensive compared to only a few years ago, the fastest route to making your data available is often one (or a combination) of the major cloud service providers, where machine and deep learning services are also readily available.
Cloud does not suit everyone, for privacy, security or regulatory reasons, and in that case you should quickly make best friends with your infrastructure and data center colleagues to get on the (not always so fast) journey of building your on-premise data lake.
Even if you opt for cloud services, you will need a solid team with data engineering and DevOps skills to build, deploy, configure and monitor the data pipelines needed. It is highly recommended to keep the data pipelining team in the same unit as, or very near, the analytics and machine learning teams to ensure a multi-disciplinary, cross-functional set-up, and to minimize the waste, lead time and diverging priorities that often arise when IT and development are separated from analytics.
Kick-Start Your AI by Using Open Source and Cloud Services
Once proper, high-quality data is in place, the development and training of the chosen machine/deep learning algorithm can start. Tremendous progress has taken place in this area in just the past couple of years. Cloud players like Google, AWS and others have developed deep learning services, APIs and specialized hardware, and much of the inner workings of deep learning is automated and abstracted through frameworks such as TensorFlow and Keras.
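As an illustration of that abstraction, defining and training a small network in Keras takes only a few declarative lines, with weight initialization, backpropagation and gradient descent handled by the framework. This is a minimal sketch with made-up synthetic data and arbitrary layer sizes, not a recommended architecture:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 256 samples, 4 features, a binary label
# (entirely made up for demonstration purposes).
x = np.random.rand(256, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

# The framework hides the inner workings behind declarative calls:
# define the layers, pick a loss and optimizer, then fit.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)
```

The same few lines scale conceptually to far larger networks; what changes in real projects is the data pipeline feeding the model, not the amount of low-level algorithm code.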
Even higher-abstraction frameworks are emerging, such as AutoML and AutoKeras, from companies like Google as well as from the open source community, and from companies such as Knime, Dataiku, Peltarion and other emerging machine/deep learning and data science tool providers building upon the available open source frameworks.
These frameworks make deep learning available to a broader audience, and in particular decrease time to market, up-front investments and the need for super-experts in the inner workings of deep learning. You will surely continue to need heavy-duty data engineers and data scientists fluent in e.g. Python, since coding is still very much required to put together a deep learning project.
Recruit Machine Learning Engineers
As more and more frameworks and methods for deep learning emerge and mature, they will in part introduce a new way of developing software and applications. Building a deep neural network is largely about identifying a proper neural network architecture, curating and feeding data to it, and then letting the deep learning algorithm and framework do the job. This is a visible shift from how data scientists (and software and data engineers) work today, and is sometimes referred to as Software 2.0.
Machine and deep learning are in many ways introducing a new role: the Machine Learning Engineer. Effectively, this role brings together today’s data scientists and data engineers, focusing on preparing data and on training, testing and deploying deep learning applications. Consequently, securing a lead ML engineer and a lead ML data scientist is the most critical step in getting started with deep learning. Besides recruiting, it is highly recommended to train your existing data scientists and engineers in the new deep learning skills, thereby leveraging existing in-house knowledge of data, business needs and process deficiencies.
Starting Is More Important Than Getting Everything Right From the Beginning
Don’t get overwhelmed by everything that ideally should be in place: starting to execute is far more important than doing it perfectly, in order to keep ahead of the competition and build competence, momentum, capabilities and know-how in AI and deep learning. You will have time to, and must, continually adjust as you and your organization jump into the world of AI and deep learning.