What is CI/CD in machine learning?

Kurtis Pykes

Machine Learning Engineer

CI/CD stands for Continuous Integration and Continuous Delivery. CI/CD is one of the core best practices in DevOps: a methodology for developing software that allows developers to release updates more frequently in a reliable, sustainable way. Although the two are usually mentioned together, the best way to grasp each method is to address them in isolation.

Continuous Integration

Continuous Integration (CI) is a development practice and coding philosophy that requires developers to integrate code many times throughout the day. When performed successfully – meaning integrations are performed regularly – the changes made to an application's code are automatically built, tested, and merged into a shared repository.

Therefore, CI aims to establish a consistent, automated way to build, package, and test applications. This consistency in the integration process encourages teams to commit code changes more often, which leads to better collaboration and software quality.
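As a rough sketch of what such an automated check might look like in a machine learning codebase (the train_model helper, the synthetic dataset, and the accuracy baseline below are illustrative assumptions, not a prescribed setup), a CI server could run a test like this on every integration:

```python
# test_training.py -- hypothetical CI check run on every integration.
# The train_model() helper and the 0.7 baseline are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def train_model(X_train, y_train):
    """Stand-in for the project's real training entry point."""
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)


def test_training_meets_baseline():
    # A small synthetic dataset keeps the check fast and deterministic.
    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = train_model(X_train, y_train)
    # Fail the build if accuracy drops below the agreed baseline.
    assert model.score(X_test, y_test) > 0.7
```

Running a suite like this automatically on every merge is what turns "integrate often" from a policy into something the pipeline actually enforces.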

Continuous Delivery

Continuous Delivery (CD) carries on from where Continuous Integration (CI) ends. Continuous Delivery is a method to automatically deliver applications to selected infrastructure environments, meaning that all changes to code (i.e., new features, configuration, bug fixes, etc.) will be taken into production quickly, reliably, and sustainably.

Generally, teams work in multiple environments other than production, such as the testing environment and the development environment. Hence, Continuous Delivery secures an automated path to push changes to them. To achieve Continuous Delivery, the code must constantly be in a state that is ready to be deployed, even though many developers will be consistently making changes daily.

Continuous Deployment

The acronym “CI” in CI/CD always refers to Continuous Integration. The acronym “CD”, in contrast, can refer to either Continuous Delivery or Continuous Deployment. Although the two terms are sometimes used interchangeably, since both are concerned with automating the latter stages of the pipeline, they can also be used separately to indicate the extent of that automation.

Take a scenario in which a developer has completed a bug fix and pushed the updated code to a shared repository. Continuous Delivery means that the code is now in an environment from which the operations team can deploy it to a live production environment. This reduces the effort to deploy the code, since a major obstacle (i.e., the communication and visibility between the development team and the business team) has been overcome.

On the other hand, Continuous Deployment picks up where Continuous Delivery ends, in the sense that it means the developers' changes are automatically released from the repository to the live production environment. This further reduces the effort to deploy code, since operations teams are not overloaded with manual processes that can slow down delivery.

[Figure: The Process of CI/CD; Photo by Joseph Matthias Goh on Medium]

In short, the term CI/CD may refer simply to the connected practices of Continuous Integration and Continuous Delivery, or it may cover all three related methods: Continuous Integration, Continuous Delivery, and Continuous Deployment.

Benefits of CI/CD in Machine Learning

We know that Continuous Integration (CI) allows developers to continuously integrate code into a shared workspace, and Continuous Delivery (CD) enables the operations team to continuously deliver the updated code into production. But why is this important? What are the benefits of CI/CD?

Smaller changes to the code

Since CI encourages developers to commit their code to a version control repository more frequently, commit cycles are typically much smaller, significantly reducing the chance that multiple developers are editing the same code at the same time. Moreover, it's much easier to identify defects, or feature engineering and model quality issues, on smaller code differentials than on larger ones developed over extended periods.

Faster release rate

Since failures are detected much faster, new features can be added and failures can be fixed much quicker, leading to increased release rates. However, it’s essential to recall that realizing continuous delivery demands the code to constantly be in a state that is ready to be deployed. For this to work effectively, the entire system must be moving continuously. Consequently, Continuous Integration is a prerequisite for CD.

Reduced costs

CI/CD takes away much of the complexity of deploying machine learning models. There are many repetitive steps involved in deploying machine learning models (e.g., Data Scientists conduct several experiments to find a champion model). By automating the CI/CD pipeline, teams significantly reduce the number of human errors. At the same time, developers have more time to focus on developing the machine learning pipeline, since mistakes are detected sooner and can be handled more quickly. There also aren't as many code changes to repair as time progresses.

Better teamwork & better products

CI/CD serves as a great way to gain continuous feedback from customers and teammates, leading to better machine learning applications while also increasing transparency and accountability among team members. The Continuous Integration side is more concerned with build failures, merge problems, and the like, providing the development team with essential information while working on a product. CD, on the other hand, is concerned with getting the product into the hands of end-users as fast as possible, which lets customers provide feedback.

Happier users

Teams develop products for users. Users are entitled to express their opinions, which should be taken into consideration by the product team and the business. CI/CD makes faster releases easy, including new features and bug fixes, keeping customers happy as their feedback is incorporated more quickly. It also makes it easier for developers to keep their products up to date with the latest technologies.

Managing Complexity

The points above mostly cover the general benefits of using CI/CD. When building a machine learning pipeline, applying CI/CD practices is mostly about scaling teams and managing complexity. In practice, small-scale projects (i.e., instances where not many models are being built) can manage without CI/CD. However, most organizations using machine learning eventually face greater complexity and larger teams, which is why CI/CD is generally regarded as part of MLOps (Machine Learning Operations) best practices.

Building machine learning models at the enterprise level can be extremely difficult to manage. A team is usually composed of multiple Data Scientists, each conducting their experiments. Therefore, numerous experiments can be running at any one time. Without a robust framework in place, tracking these experiments is challenging. This is where CI/CD comes to life in machine learning projects.

In production, machine learning models tend to suffer from “drift”: the patterns the model learned during training no longer hold. This is where CI/CD is effective. Paired with a robust framework, CI/CD practices permit continuous improvement and ensure that models keep performing effectively while in production.

CI/CD pipelines

When a business decides it wants to improve its applications reliably and continuously, it builds CI/CD pipelines. The pipelines introduce monitoring and automation to software development and to the operations process (i.e., delivery and deployment). Once the pipeline is up and running, teams can shift their focus solely to enhancing the software, since the manual burdens have been lifted from their shoulders.

A CI/CD pipeline is constructed from a series of steps that can be broken down into distinct subsets of tasks, grouped into pipeline stages. A typical pipeline includes the following stages:

  • Build – The software is compiled and packaged
  • Test – Automated tests are run against the build
  • Release – The build artifact is uploaded to a shared repository
  • Deploy – The build is deployed into a production environment
  • Validate – The organization determines the steps needed to validate the build

Business requirements vary from product to product; hence the pipeline stages may differ between organizations. The list above is not an exhaustive breakdown of pipeline stages, but it gives an idea of the common stages involved in a typical CI/CD pipeline.
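To make the stages concrete, here is a schematic sketch only, written as plain Python functions rather than a real pipeline definition. In practice each stage would live in the CI/CD tool's own configuration, and the artifact name below is made up:

```python
# pipeline_sketch.py -- schematic outline of the stages listed above.
# The stage bodies are placeholders; a real pipeline is defined in the
# CI/CD tool's own configuration, and the artifact name here is made up.

def build() -> str:
    # Compile and package the application, e.g. produce a container image.
    artifact = "model-service:1.0.0"
    print(f"built {artifact}")
    return artifact


def test(artifact: str) -> None:
    # Run the automated test suite against the freshly built artifact.
    print(f"running tests against {artifact}")


def release(artifact: str) -> None:
    # Upload the artifact to a shared repository (registry or artifact store).
    print(f"releasing {artifact} to the shared repository")


def deploy(artifact: str, environment: str) -> None:
    # Roll the same artifact out to the target environment.
    print(f"deploying {artifact} to {environment}")


def validate(environment: str) -> None:
    # Organization-specific checks: smoke tests, health probes, sign-off.
    print(f"validating the deployment in {environment}")


if __name__ == "__main__":
    artifact = build()
    test(artifact)
    release(artifact)
    deploy(artifact, environment="production")
    validate(environment="production")
```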

CI/CD best practices

A successful CI/CD pipeline permits teams to rapidly deliver reliable software to end-users, in turn opening the channel to receive timely feedback on the latest release. Based on feedback and analysis done by the team, the CI/CD practices should be constantly refined to best suit the team’s needs.

Commit early, commit often

To fully realize the benefits of Continuous Integration (CI), all developers must share their changes by pushing to the main branch (master). Additionally, all developers must update their working copy to receive everyone else’s changes. A general rule of thumb is to commit to the main branch (master) at least once a day.

Keep the builds green

The CI/CD pipeline provides rapid feedback about changes made to code. The goal should be for the team to keep the code constantly in a state that is ready to be deployed, which means issues should be addressed as soon as they arise. The benefit of maintaining code in a releasable state is that fixes can be rolled out quickly if something goes wrong in production.

Build only once

Rebuilding software for other environments introduces the risk of inconsistencies. This lowers the team's confidence, as there is no sure way to know whether all tests have passed against the artifact that actually ships. Consequently, the same build artifact should be promoted through each CI/CD pipeline stage and eventually released to production.
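One simple way to enforce the "build only once" rule is to record a checksum of the artifact at build time and verify it before every promotion. The sketch below is illustrative only; the file name and stage names are assumptions:

```python
# promote_artifact.py -- illustrative check that the artifact promoted to each
# stage is byte-for-byte the one that was originally built and tested.
import hashlib
from pathlib import Path


def artifact_digest(path: Path) -> str:
    """Return the SHA-256 checksum of a build artifact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def promote(artifact: Path, expected_digest: str, stage: str) -> None:
    """Refuse to promote the artifact if it differs from the tested build."""
    actual = artifact_digest(artifact)
    if actual != expected_digest:
        raise RuntimeError(
            f"Artifact changed between build and {stage}: "
            f"expected {expected_digest}, got {actual}"
        )
    print(f"Promoting {artifact.name} to {stage}")


if __name__ == "__main__":
    artifact = Path("model_service.tar.gz")  # hypothetical build output
    artifact.write_bytes(b"placeholder artifact contents")
    digest = artifact_digest(artifact)       # recorded once, at build time
    promote(artifact, digest, "staging")
    promote(artifact, digest, "production")
```

Because the same bytes move through every stage, a test that passed in staging was genuinely exercised against the artifact that reaches production.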

Streamline tests

Automated testing aids teams in delivering quality software and rapid feedback at a faster pace than traditional methods. However, this does not justify long wait times for results. It's not recommended to test every eventuality; instead, seek a balance between test coverage and performance.
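One common way to strike that balance is to separate quick smoke tests from slower, exhaustive ones, so the pipeline runs only the fast checks on every commit. Here is a minimal sketch using pytest markers; the marker name and thresholds are illustrative assumptions:

```python
# test_suite_sketch.py -- separating fast and slow tests with pytest markers.
# Run only the quick checks on every commit: pytest -m "not slow"
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


def test_model_trains_quickly():
    # Fast smoke test: a tiny dataset verifies the training code still runs.
    X, y = make_classification(n_samples=100, n_features=5, random_state=0)
    model = LogisticRegression(max_iter=500).fit(X, y)
    assert model.score(X, y) > 0.5


@pytest.mark.slow
def test_model_on_full_dataset():
    # Slower, more exhaustive test reserved for nightly or pre-release runs.
    X, y = make_classification(n_samples=50_000, n_features=50, random_state=0)
    model = LogisticRegression(max_iter=2000).fit(X, y)
    assert model.score(X, y) > 0.7
```

Running `pytest -m "not slow"` on each commit keeps feedback fast, while the full suite can run nightly or before a release.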

Keep the environment clean

Cleaning up your environments between each deployment is highly beneficial. Since environments typically run for extended periods, tracking the configuration changes and updates applied to each one gets more challenging as time goes on.

Have one path to deployment

Once the decision has been made to invest in building a reliable, fast, and secure CI/CD pipeline, there should be no way to undermine the hard work by allowing the process to be ignored for any reason.

Monitor and measure the pipeline

Monitoring should be implemented as part of the process when setting up the CI/CD pipeline for the sole purpose of providing teams with indications of danger as early as possible.

The team MUST buy-in

Building an effective CI/CD pipeline is a team effort. The purpose of CI/CD is to bring developers and operation teams together, but this requires buy-in from members of each team.

DevOps vs. MLOps

Understanding Machine Learning Operations (MLOps) is an absolute prerequisite for any team working with machine learning in production. MLOps details the methods used to knit together machine learning (ML) and system operations (Ops). Much of the philosophy behind MLOps is derived from its software development parallel, DevOps.

Defining DevOps

Historically, operations teams and development teams worked independently of one another. Development Operations (DevOps) was introduced to spark a cultural shift in traditional software development and bridge the gap between the two teams. Essentially, the goal of DevOps is to transform these siloed processes into a continuous set of unified steps within an organization by automating processes and developing effective feedback loops. For instance, CI/CD is one way DevOps addresses the misalignment between developers and operations.

Benefits of DevOps include:

Automation

A core DevOps best practice is the automation of processes. Automation permits teams to ship smaller releases to market faster and with relative ease. Smaller, more frequent improvements significantly reduce the risk of breaking changes, leading to better software quality. Development teams are also less burdened with tedious, manual tasks, so their focus stays on developing the best software. This benefits both the business and the end-users.

Collaboration

Adopting DevOps is an organic way to promote collaboration among teams, since its purpose was to bring two independent teams together. The practices demand a shared understanding between developers and operations that allows them to take joint responsibility for software development. This entails increasing transparency and communication across development, operations, and the business as a whole.

Defining MLOps

Machine Learning Operations (MLOps) is a set of methods used to streamline the process of taking experimental machine learning models into production systems, starting from the beginning of the machine learning lifecycle. MLOps aims to bridge the gap between design, model development, and operations by unifying the machine learning workflow so that it flows as one process, which simplifies maintenance.

Benefits of MLOps include:

Productivity

Manual handover processes cause delays, and being a jack of all trades may be too demanding. Both scenarios cut into the time Data Scientists have for wrangling data and building models. Without MLOps, businesses productionizing machine learning have to compromise by opting for whichever of these two unideal solutions they deem the lesser. In contrast, reliable MLOps tooling patches these gaps and allows Data Scientists to focus on their primary task without needing to:

  • be experts in other areas, such as cloud infrastructure
  • lose time handing their work over to another team.

Time to market

In business, time to market is important. Companies consistently seek to enhance the customer experience and want practical feedback on how they could make things better. MLOps enables enterprises to shorten time to market through practices such as CI/CD. This depends on automating training and retraining procedures, which permits businesses to get systems into production much faster.

Better quality algorithms

Whenever a process is automated, measures should be in place to catch errors. With that being said, MLOps incorporates capabilities to detect issues and alert teams to make the necessary changes. For instance, a common problem in production is model drift – the degradation of a model’s predictive power due to changes within the environment. MLOps tooling should enable teams to measure the changes in the environment before they become costly, permitting teams to ensure the ML model provides the best predictions to end-users.
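As a minimal sketch of what such a check could look like (the significance level and the alerting step are assumptions, not a prescribed method), one common approach compares the distribution of a production feature against its training distribution with a two-sample Kolmogorov-Smirnov test:

```python
# drift_check.py -- flag input drift by comparing feature distributions.
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(train_feature: np.ndarray,
                 live_feature: np.ndarray,
                 alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha


if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    train = rng.normal(loc=0.0, scale=1.0, size=10_000)  # training-time feature
    live = rng.normal(loc=0.5, scale=1.0, size=2_000)    # shifted production feature

    if detect_drift(train, live):
        # In a real pipeline this would alert the team or trigger retraining.
        print("Drift detected: investigate the change or schedule retraining.")
    else:
        print("No significant drift detected.")
```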

How are DevOps & MLOps Different?

Since MLOps derived many of its principles from DevOps, it's no surprise that many tasks involved in both functions are quite similar. For example, both sets of practices encourage previously independent teams to merge and collaborate with the help of automation. This leads to more productive teams, better collaboration, faster times to market, and better quality products.

However, machine learning introduces additional dependencies, and with them further requirements specific to ML. Because of this, using DevOps alone to operationalize machine learning is highly impractical.

Ways in which the two differ are as follows:

Version Control

Software development best practice entails tracking code under version control, which acts as documentation whenever changes or updates are made to the code base. Machine learning projects follow the same best practice, but code is not the only changing input in a machine learning system. Data is required for machine learning to be feasible; therefore, it must also be tracked under version control. Other factors that must be tracked include hyperparameters, parameters, metadata, logs, and model artifacts.
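Dedicated data versioning and experiment tracking tools handle this in practice; purely as an illustration of the idea, the sketch below records the code version, a fingerprint of the training data, and the hyperparameters alongside each run. The file names and fields are hypothetical:

```python
# track_run.py -- illustrative record of everything that produced a model:
# code version, data fingerprint, hyperparameters, and metrics.
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def data_fingerprint(path: Path) -> str:
    """Hash the training data file so the exact dataset can be identified later."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def current_git_commit() -> str:
    """Record which code version produced the model."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()


def log_run(data_path: Path, hyperparameters: dict, metrics: dict) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "git_commit": current_git_commit(),
        "data_sha256": data_fingerprint(data_path),
        "hyperparameters": hyperparameters,
        "metrics": metrics,
    }
    with Path("runs.jsonl").open("a") as log_file:
        log_file.write(json.dumps(record) + "\n")


# Example usage (paths, parameters, and metrics are hypothetical):
# log_run(Path("data/train.csv"), {"max_depth": 6, "lr": 0.1}, {"auc": 0.91})
```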

Model monitoring

While it's true that software development best practices involve monitoring, once software has been deployed to production it's safe to say that the software itself will not degrade. In contrast, machine learning models typically degrade in response to changes in the real world. Monitoring in machine learning detects degrading model performance so models can be updated before they impact the business negatively.
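As a minimal sketch of the idea (the baseline accuracy and window size are arbitrary, illustrative values), a monitoring component could track rolling accuracy over recently labelled predictions and flag the model for retraining once it falls below an agreed baseline:

```python
# performance_monitor.py -- flag a deployed model for retraining when its
# rolling accuracy over recently labelled predictions falls below a baseline.
# The baseline and window size are arbitrary, illustrative values.
from collections import deque


class PerformanceMonitor:
    def __init__(self, baseline_accuracy: float = 0.85, window_size: int = 1000):
        self.baseline = baseline_accuracy
        # Store 1 for a correct prediction, 0 for an incorrect one.
        self.outcomes = deque(maxlen=window_size)

    def record(self, prediction, actual) -> None:
        self.outcomes.append(int(prediction == actual))

    def needs_retraining(self) -> bool:
        # Only judge the model once the window is full of labelled outcomes.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        rolling_accuracy = sum(self.outcomes) / len(self.outcomes)
        return rolling_accuracy < self.baseline
```

In a real system, a positive check would feed back into the CI/CD pipeline, for example by triggering a retraining job or alerting the team.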

Hardware requirements

Software projects aren't as dependent on hardware as machine learning projects, especially those involving deep learning. Large models can take anywhere from hours to weeks to train, even on a GPU. Thus MLOps demands a more sophisticated setup to allow ML projects to run effectively.
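As a small illustration of that hardware dependence, assuming PyTorch as the framework, training code typically selects a GPU when the pipeline's infrastructure provides one and falls back to the much slower CPU otherwise, which is why the MLOps setup has to provision the right hardware:

```python
# device_selection.py -- illustrative hardware check, assuming PyTorch.
import torch

# Pick a GPU when the infrastructure provides one; otherwise the same
# training job falls back to the (much slower) CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training will run on: {device}")

# Models and tensors must be moved onto the selected device before training.
model = torch.nn.Linear(in_features=10, out_features=1).to(device)
batch = torch.randn(32, 10, device=device)
output = model(batch)
print(output.shape)  # torch.Size([32, 1])
```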

Challenges when deploying machine learning models

A significant number of challenges face any business that wishes to pursue machine learning solutions, and machine learning projects may fail for several reasons. Implementing machine learning in an enterprise introduces a few unique features that make deploying machine learning models at scale a problematic task. A non-exhaustive list of such features includes:

Language discrepancies

Machine learning models are typically integrated as part of a more extensive software system. The majority of machine learning development is performed in Python or R programming languages, whereas businesses typically implement production-based software in languages such as PHP, C++, and Java. This is mainly down to efficiency – Python and R aren’t as fast as the languages mentioned above.

Porting a machine learning model built in Python or R into the production language of the larger system is a highly complex task. Consequently, teams prefer to recode the entire model using the production language, which has its dangers:

  • re-coding models takes a significant amount of time
  • teams risk being unable to reproduce the research environment, which was programmed in a different language.

Traditional software challenges

Machine learning code makes up only a tiny segment of the entire software system. Hence machine learning projects can be thought of as software engineering projects. Because of this, machine learning projects inherit all of the challenges that accompany traditional software projects. Examples of such challenges include:

  • Reliability — The ability of the software to produce the intended result and to be robust to errors.
  • Reusability — The ability of the software to be used across systems and projects.
  • Maintainability — The ease with which the software can be maintained.
  • Flexibility — The ability to add or remove functionality.

Reproducibility

In a machine learning context, reproducibility may be defined as the ability of a machine learning model to produce the same result when supplied with the same input data, across systems. This is a crucial property in machine learning. Essentially, building reproducible machine learning pipelines helps to reduce errors and ambiguity when a model is transported from the research environment into production.
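As a minimal sketch of the basics, assuming scikit-learn as the framework: fix every source of randomness (and, in practice, also pin library versions and data snapshots) so that two runs on the same data produce the same model:

```python
# reproducibility_sketch.py -- fix the sources of randomness so two runs on the
# same data produce the same model. Framework-specific seeds (e.g. for deep
# learning libraries) and pinned dependency versions are also needed in practice.
import random

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SEED = 42
random.seed(SEED)
np.random.seed(SEED)

X, y = make_classification(n_samples=1_000, n_features=20, random_state=SEED)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=SEED
)

# Passing random_state makes the model's own randomness repeatable.
model = RandomForestClassifier(n_estimators=100, random_state=SEED)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.4f}")  # identical every run
```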

Vastly different teams

Machine learning is best performed by diverse teams, whose purpose is to ensure the model works reliably and delivers the intended results. However, this introduces challenges when there is a conflict of interest. For example, the teams involved in a machine learning project may consist of:

  • Data Scientists – Responsible for model development and data wrangling
  • Machine Learning Engineers/Software Developers – Responsible for integrating the model into the larger software system
  • Business Professionals – The knowledge base that understands how the model is used in the organization, problems the model should solve, and the customers.

When is CI/CD not feasible?

CI/CD can take software products to the next level, but it is not without flaws. Implementing CI/CD may require teams to change their software development culture, which may be difficult for some, especially those who do not see a reason to change.

The technical aspects introduced by CI/CD are much easier to implement than the culture shift. It may be difficult for developers to:

  • Do regular checks on their work throughout the day
  • Always pull from the repository before a coding session
  • Write good tests to add to the automatic test suite

Leveraging tools such as Jenkins and CircleCI is effective, as they trigger builds on check-ins and send reports to the relevant developers. If developers fail to acknowledge the reports provided by such services, or do not act on them as soon as they are noticed, then they are not implementing CI/CD effectively. Carrying out CI/CD involves stopping development to work on fixes as soon as they have been identified.

Final thoughts

CI/CD is a best practice of DevOps, from which MLOps was born and derived many of its principles. The main idea behind MLOps is to bridge the gap between design, model development, and operations by unifying the machine learning workflow. This is done by automating tasks, but because of the additional dependencies introduced in machine learning projects, MLOps demands further requirements to ensure machine learning projects can be released quickly, reliably, and sustainably.
