Enterprises across many industries are adopting artificial intelligence (AI) and machine learning (ML) at a rapid pace. Many factors fuel this accelerated adoption, including a need to realize value from the massive amounts of data generated by multichannel customer interactions and the increasing stores of data from all facets of an enterprise’s operations. This growth prompts a question: what knowledge and skill sets are needed to help organizations leverage and scale AI and ML?
To answer this question, it’s important to understand what types of transformations enterprises are going through as they aim to make better use of their data.
Many large organizations have moved beyond pilot or sample AI/ML use cases within a single team to figuring out how to solidify their data science projects and scale them to other areas of the business. As data changes or gets updated, organizations need ways to continually optimize the outcomes from their ML models.
Mainstreaming Data Science
Data science has moved into the mainstream of many organizations. People working in various line-of-business teams — such as product, marketing and supply chain — are eager to apply predictive analytics. With this growth, decentralized data science teams are popping up all over a single enterprise. But many people looking to apply predictive techniques have limited training in data science or limited knowledge of the infrastructure fundamentals for production-scale AI/ML. Additionally, enterprises are faced with a proliferation of ad hoc technologies, tools and processes.
Increasing Complexity of Data
Having achieved some early wins, often with structured or tabular data use cases, organizations are eager to derive value from the massive amounts of unstructured data they hold, including natural language, vision and other categories. One role that organizations are increasingly turning to is the ML engineer.
I have observed that as organizations mature in their AI/ML practices, they expand from hiring mainly data scientists toward hiring people with ML engineering skills. A review of hundreds of ML engineer job postings sheds light on why this role is one way to meet the transformative needs of the enterprise. Examining the frequency of certain terms in the free text of the job postings surfaces several themes:
SOFTWARE ENGINEERING
ML engineers are closely affiliated with the software engineering function. Organizations hiring ML engineers have typically achieved some wins in their initial AI/ML pilots and are moving up the ML adoption curve, from implementing ML use cases to scaling, operationalizing and optimizing ML across the business. Many job postings emphasize the software engineering aspects of ML over pure data science skills: ML engineers need to apply software engineering practices and write performant, production-quality code.
DATA
Enterprises are looking for people with the ability to create pipelines or reusable processes for various aspects of ML workflows. This involves both collaborating with data engineers (another in-demand role) and creating the infrastructure for robust data practices throughout the end-to-end ML process. In other words, ML engineers create processes and partnerships to help with cleaning, labeling and working with large-scale data from across the enterprise.
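To make the pipeline idea concrete, here is a minimal, illustrative sketch of a reusable preprocessing-and-training pipeline built with scikit-learn. The file path and column names (customer_data.csv, age, tenure_months, plan, churned) are hypothetical placeholders, not drawn from any particular enterprise workflow.

```python
# A minimal sketch of a reusable preprocessing + training pipeline,
# assuming a hypothetical tabular churn dataset; all names are placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("customer_data.csv")  # placeholder data source
X, y = df.drop(columns=["churned"]), df["churned"]

preprocess = ColumnTransformer([
    # Impute and scale numeric features.
    ("numeric", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), ["age", "tenure_months"]),
    # One-hot encode categorical features, tolerating unseen categories.
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["plan"]),
])

pipeline = Pipeline([
    ("preprocess", preprocess),
    ("model", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
pipeline.fit(X_train, y_train)
print("holdout accuracy:", pipeline.score(X_test, y_test))
```

Packaging cleaning, encoding and training into a single reusable object like this is one way to ensure the same transformations run identically in training and in production serving.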
PRODUCTION
Many employers look for ML engineers who have experience with the end-to-end ML process, especially taking ML models to production. ML engineers work with data scientists to productionize their work, building pipelines for continuous training, automated validation and version control of the model.
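As one hedged illustration of automated validation inside a continuous-training pipeline, the sketch below promotes a newly retrained candidate model only if it outperforms the currently deployed model on a holdout set. The improvement threshold and the commented-out registry call are assumptions standing in for whatever model registry and promotion policy a team actually uses.

```python
# A sketch of an automated validation gate for a retraining job.
# The registry call is a hypothetical placeholder, not a real API.
from sklearn.metrics import roc_auc_score


def validate_and_promote(candidate, production, X_holdout, y_holdout,
                         min_improvement=0.002):
    """Compare a candidate model against the production model on a holdout
    set and decide whether to promote it."""
    candidate_auc = roc_auc_score(
        y_holdout, candidate.predict_proba(X_holdout)[:, 1]
    )
    production_auc = roc_auc_score(
        y_holdout, production.predict_proba(X_holdout)[:, 1]
    )

    if candidate_auc >= production_auc + min_improvement:
        # register_model(candidate, metrics={"auc": candidate_auc})
        # (hypothetical registry/versioning call)
        return True, candidate_auc
    return False, candidate_auc
```

Gating promotion on a measurable improvement, rather than promoting every retrained model, is one common way teams keep automated retraining from silently degrading production quality.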
SYSTEMS
Many ML engineers are hired to help organizations put the architecture, systems and best practices in place to take AI/ML models to production. ML engineers deploy ML models either in cloud environments or on on-premises infrastructure. The emphasis on systems and best practices helps drive consistency as people with limited data science or infrastructure fundamentals learn to derive value from predictive analytics. This focus on systematizing AI/ML is also a critical prerequisite for developing an AI/ML governance strategy.
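For a sense of what taking a model to production can look like in a cloud environment, here is a minimal, non-authoritative sketch using the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, bucket path, model artifacts and sample instance are placeholders; an on-premises deployment would follow a similar upload, deploy and predict pattern with different tooling.

```python
# A sketch of deploying an exported model to a managed online endpoint.
# Project, bucket and instance values below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-bucket/models/churn/",  # exported model artifacts
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploy to an endpoint and request an online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[[34, 12, 0, 1]]))
```

In practice, steps like these are typically wrapped in CI/CD jobs rather than run by hand, which is exactly the kind of systematization this role is hired to provide.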
This analysis of ML engineering jobs is not based on any single job posting, nor on postings specific to the enterprise I work in. Rather, it reflects a qualitative evaluation of general themes across publicly available job postings for ML engineers, a role that is critical for enterprises scaling AI/ML.
Within enterprises, ML engineers reside in a variety of teams, including data science, software engineering, research and development, product groups, process/operations and other business units.
While demand for ML engineers is at an all-time high, several industries are at the forefront of hiring for these roles. The industries with the highest demand for ML engineers include computers and software, finance and banking, and professional services.
As AI and ML continue to grow and mature as a practice in enterprises, ML engineers play a pivotal role in helping to scale AI/ML usage and outcomes. ML engineers enable data scientists to focus on what they do best by establishing the infrastructure, processes and best practices needed to realize business value from AI/ML models in production. This is especially the case as data volumes and complexity grow.
Google Cloud Skills Boost offers a number of courses that can help your teams build ML engineering skills on their path to achieving the Professional Machine Learning Engineer certification. To learn more about how Google Cloud products and services empower enterprises to do more with AI and ML, visit our AI and ML products page or read this blog post about some of our top resources for getting started with Google Cloud services like Vertex AI, our machine learning platform built for the needs of ML engineers.
For the latest from Google Cloud ML experts and customers, check out on-demand sessions from our Applied ML Summit to get a firsthand look at additional learning events for you and your teams.