CoreML – Model properties

If you have been following the posts in this open notebook, you may know that by now we have managed to create a linear regression model for the Boston prices dataset based on two predictors, namely crime rate and average number of rooms. It is by no means the best model out there, and our aim is simply to explore the creation of a model (in this case with Python) and its conversion to a Core ML model that can be deployed in an iOS app.

Before moving on to the development of the app, I thought it would be good to take a look at the properties of the converted model. If we open the PriceBoston.mlmodel we saved in the previous post (in Xcode, of course) we will see the following information:

We can see the name of the model (PriceBoston) and the fact that it is a “Pipeline Regressor”. The model can be given various attributes such as Author, Description, License, etc. We can also see the Model Evaluation Parameters listed in the form of Inputs (crime rate and number of rooms) and Outputs (price). There is also an entry describing the Model Class (PriceBoston); until the model is attached to a target, the class is not actually generated. Once we make this model part of a target inside an app, Xcode will generate the appropriate code.
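Incidentally, these attributes do not have to be typed into Xcode: coremltools lets us fill them in from Python before (or after) saving the model. The snippet below is only a minimal sketch; the author, description and licence strings are placeholders you would replace with your own:

import coremltools

# Load the model we saved in the previous post
model = coremltools.models.MLModel("PriceBoston.mlmodel")

# Fill in the attributes that Xcode displays in the model inspector
model.author = "Your Name"
model.license = "MIT"
model.short_description = "Linear regression for Boston house prices (crime rate and rooms)."
model.input_description["crime"] = "Per capita crime rate of the town."
model.input_description["rooms"] = "Average number of rooms per dwelling."
model.output_description["price"] = "Median house value in $1000s."

# Save the model again so the metadata travels with the .mlmodel file
model.save("PriceBoston.mlmodel")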

Just to give you a flavour of the code that will be generated when we attach this model to a target, please take a look at the screenshot below:


You can see that the code was generated automatically (see the comment at the beginning of the Swift file). It defines the input variables and feature names, provides a way to extract values from the input features, sets up the model output, and takes care of other bits and pieces such as the class used for model loading and prediction (not shown). All of this is handled by Xcode, making it very easy for us to use the model in our app. We will start building that app in the following posts (bear with me, I promise we will get there).

Enjoy!

CoreML – Building the model for Boston Prices

In the last post we took a look at the Boston prices dataset loaded directly from Scikit-learn. In this post we are going to build a linear regression model and convert it into a .mlmodel file to be used in an iOS app.

We are going to need some modules:

import coremltools
import pandas as pd
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split
from sklearn import metrics
import numpy as np

coremltools is the module that will enable us to convert our model for use in iOS.

Let us start by defining a main function to load the dataset:

def main():
    print('Starting up - Loading Boston dataset.')
    boston = datasets.load_boston()
    boston_df = pd.DataFrame(boston.data)
    boston_df.columns = boston.feature_names
    print(boston_df.columns)

In the code above we have loaded the dataset and created a pandas dataframe to hold the data and the names of the columns. As we mentioned in the previous post, we are going to use only the crime rate and the number of rooms to create our model:

    print("We now choose the features to be included in our model.")
    X = boston_df[['CRIM', 'RM']]
    y = boston.target

Please note that we are separating the target variable from the predictor variables. Although this dataset is not too large, we are going to follow best practice and split the data into training and testing sets:

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=7)

We will only use the training set in the creation of the model and will test with the remaining data points.

    my_model = glm_boston(X_train, y_train)

The line of code above assumes that we have defined the function glm_boston as follows:

def glm_boston(X, y):
    print("Implementing a simple linear regression.")
    lm = linear_model.LinearRegression()
    glm = lm.fit(X, y)
    return glm

Notice that we are using the LinearRegression implementation in Scikit-learn. Let us go back to the main function we are building and extract the coefficients of our linear model. Refer to the CoreML – Linear Regression post to recall that the type of model we are building is of the form  y=\alpha + \beta_1 x_1 + \beta_2 x_2 + \epsilon:

    coefs = [my_model.intercept_, my_model.coef_]
    print("The intercept is {0}.".format(coefs[0]))
    print("The coefficients are {0}.".format(coefs[1]))

We can also take a look at some metrics that let us evaluate our model against the test data. Note that we first need predictions for the test set:

    # Generate predictions for the test set
    y_pred = my_model.predict(X_test)

    # Calculate MAE, MSE, RMSE
    print("The mean absolute error is {0}.".format(
        metrics.mean_absolute_error(y_test, y_pred)))
    print("The mean squared error is {0}.".format(
        metrics.mean_squared_error(y_test, y_pred)))
    print("The root mean squared error is {0}.".format(
        np.sqrt(metrics.mean_squared_error(y_test, y_pred))))

CoreML conversion

And now for the big moment: We are going to convert our model to an .mlmodel object!! Ready?

    print("Let us now convert this model into a Core ML object:")
    # Convert model to Core ML
    coreml_model = coremltools.converters.sklearn.convert(my_model,
                                        input_features=["crime", "rooms"],
                                        output_feature_names="price")
    # Save Core ML Model
    coreml_model.save("PriceBoston.mlmodel")
    print("Done!")

We are using the sklearn.convert method of coremltools.converters to convert my_model, specifying the necessary inputs (i.e. crime and rooms) and the output (price). Finally, we save the model to a file named PriceBoston.mlmodel.
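If you are working on macOS, you can also exercise the converted model directly from Python before opening Xcode. The predict method of the Core ML model takes a dictionary keyed by the input feature names; the values below are just illustrative:

    # Optional sanity check (works on macOS only, where the Core ML runtime is available)
    sample = {"crime": 0.25, "rooms": 6.2}
    print(coreml_model.predict(sample))  # expected output: a dictionary containing 'price'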

Et voilà! In the next post we will start creating an iOS app to use the model we have just built.

You can look at the code (in development) in my github site here.

CoreML – Boston Prices exploration

In the previous post of this series we described some of the basics of linear regression, one of the most well-known models in machine learning. We saw that we can relate the values of input parameters x_i to the target variable y to be predicted. In this post we are going to create a linear regression model to predict the price of houses in Boston (based on valuations from the 1970s). The dataset provides information such as the per capita crime rate (CRIM), the proportion of non-retail business acres in the town (INDUS), the proportion of owner-occupied units built before 1940 (AGE) and the average number of rooms (RM), as well as the median value of homes in $1000s (MEDV) and other attributes.

Let us start by exploring the data. We are going to use Scikit-learn and fortunately the dataset comes with the module. The input variables are held in the data attribute and the price is given by target. We are going to load the input variables into the dataframe boston_df and the prices into the array y:

from sklearn import datasets
import pandas as pd 
boston = datasets.load_boston() 
boston_df = pd.DataFrame(boston.data)
boston_df.columns = boston.feature_names
y = boston.target
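Before narrowing things down, it is worth confirming what we have just loaded; the dataset contains 506 observations and 13 input attributes:

print(boston_df.shape)   # (506, 13)
print(boston_df.head())  # first few rows of the input attributes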

We are going to build our model using only a limited number of inputs. In this case let us pay attention to the average number of rooms and the crime rate:

X = boston_df[['CRIM', 'RM']]
X.columns = ['Crime', 'Rooms']
X.describe()

The description of these two attributes is as follows:

            Crime       Rooms
count  506.000000  506.000000
mean     3.593761    6.284634
std      8.596783    0.702617
min      0.006320    3.561000
25%      0.082045    5.885500
50%      0.256510    6.208500
75%      3.647423    6.623500
max     88.976200    8.780000

As we can see, the minimum number of rooms is about 3.56 and the maximum is 8.78, whereas for the crime rate the minimum is 0.006 and the maximum is 88.98, although the median is only 0.26. We will use some of these values to define the ranges offered to our users when asking for price predictions.
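If you prefer to pull those ranges out programmatically rather than reading them off the table, something along these lines would do; how (or whether) to round them for the app is up to you:

# Ranges we could expose to users in the app, taken from the data itself
crime_range = (float(X['Crime'].min()), float(X['Crime'].max()))
rooms_range = (float(X['Rooms'].min()), float(X['Rooms'].max()))
print("Crime rate range: {0}".format(crime_range))
print("Rooms range: {0}".format(rooms_range))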

Finally, let us visualise the data:
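One simple option is to plot each of our two predictors against the median house price; a minimal matplotlib sketch along those lines:

import matplotlib.pyplot as plt

# Scatter plots of each predictor against the median price
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(X['Crime'], y, alpha=0.5)
axes[0].set_xlabel('Crime rate')
axes[0].set_ylabel('Median price ($1000s)')
axes[1].scatter(X['Rooms'], y, alpha=0.5)
axes[1].set_xlabel('Average number of rooms')
axes[1].set_ylabel('Median price ($1000s)')
plt.tight_layout()
plt.show()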

We shall bear these values in mind when building our regression model in subsequent posts.

You can look at the code (in development) in my github site here.