Sometimes you find yourself writing the same code over and over. When that starts happening, you know it's time to take what you've learned and create a reusable piece of code that can be applied in the future. With the experience we've gained writing previous blog posts, it's a good time to build a reusable service that can host any number of machine learning models.
Introducing the ml_base Package
The ml_base package defines a common set of base classes that are useful for working with machine learning model prediction code. The base classes define a set of interfaces that help make ML code reusable and testable. The core of the ml_base package is the MLModel class, which defines a simple interface for doing machine learning model prediction. In this blog post, we'll show how to use the MLModel class.
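To give a feel for the interface, here is a minimal sketch of what subclassing MLModel might look like. The exact set of properties and methods the base class requires is defined by the package itself; the names below, and the stand-in prediction logic, are assumptions for illustration.

```python
from ml_base import MLModel  # assumed import path for the package


class IrisModel(MLModel):
    """Prediction code wrapped behind the common MLModel interface."""

    # metadata properties assumed to be required by the base class;
    # the base class may also require input/output schema properties
    display_name = "Iris Model"
    qualified_name = "iris_model"
    description = "Predicts the species of an iris flower."
    version = "1.0.0"

    def __init__(self):
        # a real model would deserialize a trained artifact here
        self._species = ["setosa", "versicolor", "virginica"]

    def predict(self, data):
        # stand-in logic; a real model would run inference on `data`
        return {"species": self._species[0]}
```

Because every model exposes the same interface, any service or pipeline written against MLModel can host this class without knowing anything about irises.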
10 Ways to Deploy a Machine Learning Model
In previous blog posts we've seen how it is possible to deploy the same model in ten different ways. The model itself was developed once and released as a package, which was then used in each deployment. These blog posts started as an exercise in finding new and interesting ways to deploy an ML model, so we decided to write this post about some of the things we've learned along the way. To be able to deploy the same model in ten different ways, we needed to build the model so that it was compatible with each deployment approach. We also needed to make it easy to install and to make sure that the model published metadata about itself. All of these features of the model became very important once we needed to deploy it into a real software system.
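As a hedged illustration of the metadata point, here is what a model that publishes metadata about itself might look like. The class and field names are made up for this sketch and are not the actual model package from the series.

```python
# Illustrative only: a model package that describes itself, so that
# deployment code can surface the same metadata without hard-coding it.
class IrisModel:
    qualified_name = "iris_model"   # stable identifier deployments can key on
    version = "1.0.0"               # version of the trained model artifact
    input_schema = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
    output_schema = ["species"]

    def predict(self, features: dict) -> dict:
        # a trained model would be used here; a constant keeps the sketch short
        return {"species": "setosa"}


model = IrisModel()
print(model.qualified_name, model.version)  # deployments read this at runtime
```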
An Apache Beam ML Model Deployment
Data processing pipelines are useful for solving a wide range of problems. For example, an Extract, Transform, and Load (ETL) pipeline is a type of data processing pipeline that is used to extract data from one system and save it to another system. Inside of an ETL pipeline, the data may be transformed and aggregated into more useful formats. ETL jobs are useful for making the predictions made by a machine learning model available to users or to other systems. The ETL for such an ML model deployment looks like this: extract features used for prediction from a source system, send the features to the model for prediction, and save the predictions to a destination system. In this blog post we will show how to deploy a machine learning model inside of a data processing pipeline that runs on the Apache Beam framework.
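Here is a rough sketch of that extract-predict-load shape in Beam's Python SDK. The file paths, record layout, and the iris_model package are assumptions made up for this example.

```python
import json

import apache_beam as beam


class PredictFn(beam.DoFn):
    """Calls the model once per input record."""

    def setup(self):
        # load the model once per worker rather than once per element;
        # iris_model is a hypothetical packaged model, not a real library
        from iris_model import IrisModel
        self._model = IrisModel()

    def process(self, record):
        features = json.loads(record)                 # extract: one JSON record per line
        prediction = self._model.predict(features)    # transform: model prediction
        yield json.dumps(prediction)                  # load: serialized prediction


with beam.Pipeline() as pipeline:
    (pipeline
     | "ReadFeatures" >> beam.io.ReadFromText("features.jsonl")    # source system
     | "Predict" >> beam.ParDo(PredictFn())
     | "WritePredictions" >> beam.io.WriteToText("predictions"))   # destination system
```

Loading the model in setup() rather than in process() matters here: Beam may call process() millions of times, but setup() runs once per worker, so the cost of deserializing the model is paid only once.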
A ZeroRPC ML Model Deployment
There are many different ways for two software processes to communicate with each other. When deploying a machine learning model, it's often simpler to isolate the model code inside of its own process. Any code that needs predictions then communicates with the process that is running the model code. This approach is easier than embedding the model code in the process that needs the predictions, because it saves us the trouble of recreating the model's algorithm in the programming language of that process. Remote procedure calls (RPC) are widely used to connect code that is executing in different processes, and in the last few years the rise in popularity of microservice architectures has also caused a rise in the popularity of RPC for integrating systems.
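To make the pattern concrete, here is a minimal sketch of hosting a model behind ZeroRPC. The ModelService class, the iris_model package, and the port are assumptions for illustration.

```python
import zerorpc


class ModelService:
    """Runs the model in its own process and exposes predict over RPC."""

    def __init__(self):
        # a stand-in for loading the real packaged model
        from iris_model import IrisModel  # hypothetical model package
        self._model = IrisModel()

    def predict(self, features):
        return self._model.predict(features)


# server process: host the model behind an RPC endpoint
server = zerorpc.Server(ModelService())
server.bind("tcp://0.0.0.0:4242")
server.run()

# in a separate client process, predictions are one method call away:
#   client = zerorpc.Client()
#   client.connect("tcp://127.0.0.1:4242")
#   client.predict({"sepal_length": 5.1, ...})
```

The client never imports the model code at all; it only needs the endpoint address and the name of the remote method.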