# Data Science

A collection of Data Science and Data Visualisation related posts, pics and thoughts. Take a look and enjoy.

## CoreML - iOS Implementation for the Boston Model (part 3) - Button

We are very close to getting a functioning app for our Boston Model. In the last post we put together the code that fills in the values in the picker and lets us "pick" the values shown for crime rate and number of rooms respectively. These values are fed to the model we built in one of the earlier posts of this series, and the idea is that we will action this via a button that triggers the calculation of the prediction. In turn the prediction will be shown in a floating dialogue box.

In this post we are going to activate the functionality of the button and show the user the values that have been picked. With this we will be ready to weave in the CoreML model in the final post of this series. So, what are we waiting for? Let us launch Xcode and get working. We have already done a bit of work for the button in the previous post where we connected the button to the ViewController generating a line of code that read as follows:

@IBOutlet weak var predictButton: UIButton!

If we launch the application and click on the button, sadly, nothing will happen. Let's change that: in the definition of the ViewController class, after the didReceiveMemoryWarning function, write the following piece of code:

@IBAction func getPrediction() {
    let selectedCrimeRow = inputPicker.selectedRow(inComponent: inputPredictor.crime.rawValue)
    let crime = crimeData[selectedCrimeRow]

    let selectedRoomRow = inputPicker.selectedRow(inComponent: inputPredictor.rooms.rawValue)
    let rooms = roomData[selectedRoomRow]

    let message = "The picked values are Crime: \(crime) and Rooms: \(rooms)"

    // Wrap the message in an alert and present it to the user
    let alert = UIAlertController(title: "Picked Values",
                                  message: message,
                                  preferredStyle: .alert)
    let action = UIAlertAction(title: "OK", style: .default,
                               handler: nil)
    alert.addAction(action)
    present(alert, animated: true, completion: nil)
}

The first four lines of the getPrediction function take the values from the picker and create constants for crime and rooms that are then used in a message to be displayed in the application. We wrap this message in an alert and ask Xcode to present it to the user (the last lines in the code above). What we need to do now is tell Xcode that this function is to be triggered when we click on the button.

There are several ways we can connect the button with the code above. In this case we are going to go to the Main.storyboard, control+click on the button and drag. This will show an arrow; we need to connect that arrow with the View Controller icon (a yellow circle with a white square inside) at the top of the view controller window we are putting together. When you let go, you will see a drop-down menu. From there, under "Sent Events", select the function we created above, namely getPrediction. See the screenshots below:

You can now run the application. Select a number from each of the columns in the picker, and when ready, prepare to be amazed: Click on the "Calculate Prediction" button, et voilà - you will see a new window telling you the values you have just picked. Tap "OK" and start again!

In the next post we will add the CoreML model, and modify the event for the button to take the two values picked and calculate a prediction which in turn will be shown in the floating window. Stay tuned.

You can look at the code (in development) in my github site here.

## nteract - a great Notebook experience

I am a supporter of using Jupyter Notebooks for data exploration and code prototyping. It is a great way to start writing code and immediately get interactive feedback. Not only can you document your code there using markdown, but also you can embed images, plots, links and bring your work to life.

Nonetheless, there are some little annoyances, for instance the fact that I need to launch a kernel to open a file, and having to do that "the long way" - i.e. I cannot simply double-click on the file that I am interested in seeing. Some ways to overcome this include looking at Github versions of my code, as the notebooks are rendered automatically, or even saving HTML or PDF versions of the notebooks. I am sure some of you may have similar solutions for this.

Last week, while looking for entries on something completely different, I stumbled upon a post that suggested using nteract. It sounded promising and I took a look. It turned out to be related to the Hydrogen package available for Atom, something I have used in the past and loved. nteract was different though, as it offered a desktop version and other goodies such as in-app support for publishing, a terminal-free experience, sticky cells, input and output hiding... Bring it on!

I just started using it, and so far so good. You may want to give it a try, and maybe even contribute to the git repo.

## Intro to Data Science Talk

Full room and great audience at General Assembly this evening. Lots of thoughtful questions and good discussion.

## CoreML - iOS Implementation for the Boston Model (part 2) - Filling the Picker

Right! Where were we? Yes, last time we put together a skeleton for the CoreML Boston Model application that will take two inputs (crime rate and number of rooms) and provide a prediction of the price of a Boston property (based on 1970s prices, that is...). We are making use of three labels, one picker and one button.

Let us start by creating variables to hold the potential values for the input variables. We will do this in the ViewController, selecting this file from the left-hand side menu:

Inside the ViewController class definition enter the following variable assignments:

let crimeData = Array(stride(from: 0.1, through: 0.3, by: 0.01))
let roomData = Array(4...9)

These values are informed by the data exploration we carried out in an earlier post. We are going to use the arrays defined above to populate the values that will be shown in our picker. For this we need to define a data source for the picker and make sure that there are two components to choose values from.

Before we do any of that we need to connect the view from our storyboard to the code; in particular we need to create outlets for the picker and for the button. Select the Main.storyboard from the menu on the left-hand side. With the Main.storyboard in view, in the top right-hand corner of Xcode you will see a button with an icon that has two intersecting circles; click on that icon. You will now see the storyboard side-by-side with the code. While pressing the Control key, select the picker by clicking on it; without letting go, drag into the code window (you will see an arrow appear as you drag):

You will see a dialogue window where you can now enter a name for the element in your Storyboard. In this case I am calling my picker inputPicker, as shown in the figure on the left. After pressing the "Connect" button a new line of code appears and you will see a small circle on top of the code line number, indicating that a connection with the Storyboard has been made. Do the same for the button and call it predictButton.

In order to make our life a little bit easier, we are going to bundle together the input values. At the bottom of the ViewController code write the following:

enum inputPredictor: Int {
    case crime = 0
    case rooms
}

We have defined an enum called inputPredictor that will hold the values for crime and rooms. In turn we will use this object to populate the picker as follows: in the same ViewController file, after the class definition that is provided in the project by default, we are going to write an extension for the data source. Write the following code:

extension ViewController: UIPickerViewDataSource {

    func numberOfComponents(in pickerView: UIPickerView) -> Int {
        return 2
    }

    func pickerView(_ pickerView: UIPickerView,
                    numberOfRowsInComponent component: Int) -> Int {
        guard let inputVals = inputPredictor(rawValue: component) else {
            fatalError("No predictor for component")
        }

        switch inputVals {
        case .crime:
            return crimeData.count
        case .rooms:
            return roomData.count
        }
    }
}

With the function numberOfComponents we are indicating that we want to have 2 components in this view. Notice that inside the pickerView function we are creating a constant inputVals defined by the values from inputPredictor. So far we have indicated where the values for the picker come from, but we have not delegated the actions that can be taken with those values, namely displaying them and picking them (after all, this element is a picker!) so that we can use the values elsewhere. If you were to execute this app, you would see an empty picker...

OK, so what we need to do is create the UIPickerViewDelegate, and we do this by entering the following code right under the previous snippet:

extension ViewController: UIPickerViewDelegate {

    func pickerView(_ pickerView: UIPickerView, titleForRow row: Int,
                    forComponent component: Int) -> String? {
        guard let inputVals = inputPredictor(rawValue: component) else {
            fatalError("No predictor for component")
        }

        switch inputVals {
        case .crime:
            return String(crimeData[row])
        case .rooms:
            return String(roomData[row])
        }
    }

    func pickerView(_ pickerView: UIPickerView, didSelectRow row: Int,
                    inComponent component: Int) {
        guard let inputVals = inputPredictor(rawValue: component) else {
            fatalError("No predictor for component")
        }

        switch inputVals {
        case .crime:
            print(String(crimeData[row]))
        case .rooms:
            print(String(roomData[row]))
        }
    }
}


In the first function we are defining what values are to be shown for the titleForRow in the picker, and we do this for each of the two components we have, i.e. crime and rooms. In the second function we are defining what happens when we didSelectRow, in other words when the user selects the value that is being shown by each of the two components in the picker. Not too bad, right?

Well, if you were to run this application you would still see no change in the picker... Why is that? The answer is that we need to let the application know what needs to be shown when the elements load. Go back to the top of the code (around line 20 or so), below the code lines that defined the outlets for the picker and the button. There, write the following code:

override func viewDidLoad() {
    super.viewDidLoad()
    // Picker data source and delegate
    inputPicker.dataSource = self
    inputPicker.delegate = self
}

OK, we can now run the application: On the top left-hand side of the Xcode window you will see a play button; clicking on it will launch the Simulator and you will be able to see your picker working. Go on, select a few values from each of the elements:

In the next post we will write code to activate the button to run a prediction using our CoreML model with the values selected from the picker and show the result to the user. Stay tuned!

You can look at the code (in development) in my github site here.

## CoreML - iOS App Implementation for the Boston Price Model (Part 1)

Hey! How are things? I hope the beginning of the year is looking great for you all. As promised, I am back to continue the open notebook for the implementation of a Core ML model in a simple iOS app. In one of the previous posts we created a linear regression model to predict prices for Boston properties (1970s prices, that is!) based on two inputs: the crime rate per capita in the area and the average number of rooms in the property. Also, we saw (in a different post) the way in which Core ML implements the properties of the model to be used in an iOS app to carry out the prediction on device!

In this post we will start building the iOS app that will use the model to enable our users to generate a prediction based on input values for the parameters used in the model. Our aim is to build a simple interface where the user enters the values and the predicted price is shown. Something like the following screenshot:

You will need to have access to a Mac with the latest version of Xcode. At the time of writing I am using Xcode 9.2. We will cover the development of the app, but not so much the deployment (we may do so if people let me know there is interest).

In Xcode we will select the "Create New Project" and in the next dialogue box, from the menu at the top make sure that you select "iOS" and from the options shown, please select the "Single View App" option and then click the "Next" button.

This will create an iOS app with a single page. If you need more pages/views, this is still a good place to start, as you can add further "View Controllers" while you develop the app. Right, so in the next dialogue box Xcode will ask for options to create the new project. Give your project a name, something that makes it easy to see what your project is about. In this case I am calling the project "BostonPricer". You can also provide the name of a team (the team of developers contributing to your app, for instance) as well as an organisation name and identifier. In our case these are not that important and you can enter any suitable values you desire. Please note that they become more important in case you are planning to send your app for approval to Apple. Anyway, make sure that you select "Swift" as the programming language and leave the option boxes for "Use Core Data", "Include Unit Tests" and "Include UI Tests" unticked. I am redacting some values below:

On the left-hand side menu, click on the "Main.storyboard". This is the main view that our users will see and interact with. It is here where we will create the design, look-and-feel and interactions in our app.

We will start placing a few objects in our app; some of them will be used simply to display text (labels and information), whereas others will be used to create interactions, in particular to select input values and to generate the prediction. To do that we will use the "Object Library". In the current window of Xcode, in the bottom-right corner you will see an icon that looks like a little square inside a circle; this is the "Show the Object Library" icon. When you select it, at the bottom of the area you will see a search bar. There you will look for the following objects:

• Label
• Picker View
• Button

You will need three labels, one picker and one button. You can drag each of the elements from the "Object Library" results shown and into the story board. You can edit the text for the labels and the button by double clicking on them. Do not worry about the text shown for the picker; we will deal with these values in future posts. Arrange the elements as shown in the screenshot below:

OK, so far so good. In the next few posts we will start creating the functionality for each of these elements and implement the prediction generated by the model we have developed. Keep in touch.

You can look at the code (in development) in my github site here.

## CoreML - Model properties

If you have been following the posts in this open notebook, you may know that by now we have managed to create a linear regression model for the Boston Price dataset based on two predictors, namely crime rate and average number of rooms. It is by no means the best model out there, and our aim is to explore the creation of a model (in this case with Python) and convert it to a Core ML model that can be deployed in an iOS app.

Before moving on to the development of the app, I thought it would be good to take a look at the properties of the converted model. If we open the PriceBoston.mlmodel we saved in the previous post (in Xcode, of course) we will see the following information:

We can see the name of the model (PriceBoston) and the fact that it is a "Pipeline Regressor". The model can be given various attributes such as Author, Description, License, etc. We can also see the listing of the Model Evaluation Parameters in the form of Inputs (crime rate and number of rooms) and Outputs (price). There is also an entry to describe the Model Class (PriceBoston); without attaching this model to a target, the class is actually not present. Once we make this model part of a target inside an app, Xcode will generate the appropriate code.

Just to give you a flavour of the code that will be generated when we attach this model to a target, please take a look at the screenshot below:

You can see that the code was generated automatically (see the comment at the beginning of the Swift file). The code defines the input variables and feature names, defines a way to extract values out of the input strings, sets up the model output and other bits and pieces such as defining the class for model loading and prediction (not shown). All this is taken care of by Xcode, making it very easy for us to use the model in our app. We will start building that app in the following posts (bear with me, I promise we will get there).

Enjoy!

## CoreML - Building the model for Boston Prices

In the last post we have taken a look at the Boston Prices dataset loaded directly from Scikit-learn. In this post we are going to build a linear regression model and convert it to a .mlmodel to be used in an iOS app.

We are going to need some modules:

import coremltools
import pandas as pd
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split
from sklearn import metrics
import numpy as np

The coremltools module is the one that will enable the conversion of our model for use in iOS.

Let us start by defining a main function to load the dataset:

def main():
    # Load the Boston dataset bundled with Scikit-learn
    boston = datasets.load_boston()
    boston_df = pd.DataFrame(boston.data)
    boston_df.columns = boston.feature_names
    print(boston_df.columns)

In the code above we have loaded the dataset and created a pandas dataframe to hold the data and the names of the columns. As we mentioned in the previous post, we are going to use only the crime rate and the number of rooms to create our model:

    print("We now choose the features to be included in our model.")
    X = boston_df[['CRIM', 'RM']]
    y = boston.target

Please note that we are separating the target variable from the predictor variables. Although this dataset in not too large, we are going to follow best practice and split the data into training and testing sets:

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=7)

We will only use the training set in the creation of the model and will test with the remaining data points.

    my_model = glm_boston(X_train, y_train)

The line of code above assumes that we have defined the function glm_boston as follows:

def glm_boston(X, y):
    print("Implementing a simple linear regression.")
    lm = linear_model.LinearRegression()
    gml = lm.fit(X, y)
    return gml

Notice that we are using the LinearRegression implementation in Scikit-learn. Let us go back to the main function we are building and extract the coefficients for our linear model. Refer to the CoreML - Linear Regression post to recall that the type of model we are building is of the form $y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \epsilon$:

    coefs = [my_model.intercept_, my_model.coef_]
    print("The intercept is {0}.".format(coefs[0]))
    print("The coefficients are {0}.".format(coefs[1]))
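As a quick sanity check, the intercept and coefficients stored by Scikit-learn reproduce the model's predictions exactly. Here is a standalone sketch with toy data of my own making (the numbers are illustrative, not the Boston data):

```python
import numpy as np
from sklearn import linear_model

# Toy data with two predictors, mirroring the crime/rooms setup.
X = np.array([[0.1, 6.0], [0.2, 5.0], [0.3, 7.0], [0.15, 6.5]])
y = np.array([24.0, 21.0, 28.0, 26.0])

lm = linear_model.LinearRegression().fit(X, y)

# y = alpha + X @ beta reproduces predict() term by term.
manual = lm.intercept_ + X @ lm.coef_
print(np.allclose(manual, lm.predict(X)))  # True
```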

We can also take a look at some metrics that let us evaluate our model against the test data:

    # Generate predictions for the test set
    y_pred = my_model.predict(X_test)

    # calculate MAE, MSE, RMSE
    print("The mean absolute error is {0}.".format(
        metrics.mean_absolute_error(y_test, y_pred)))
    print("The mean squared error is {0}.".format(
        metrics.mean_squared_error(y_test, y_pred)))
    print("The root mean squared error is {0}.".format(
        np.sqrt(metrics.mean_squared_error(y_test, y_pred))))
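For those who like to see the arithmetic, here is a tiny worked example of the three metrics with made-up numbers (nothing to do with the Boston data):

```python
import numpy as np
from sklearn import metrics

# Three observations and their (made-up) predictions.
y_true = np.array([3.0, 5.0, 2.0])
y_hat = np.array([2.5, 5.0, 3.0])

mae = metrics.mean_absolute_error(y_true, y_hat)   # (0.5 + 0.0 + 1.0) / 3
mse = metrics.mean_squared_error(y_true, y_hat)    # (0.25 + 0.0 + 1.0) / 3
rmse = np.sqrt(mse)

print(mae)             # 0.5
print(round(rmse, 3))  # 0.645
```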

## CoreML conversion

And now for the big moment: We are going to convert our model to an .mlmodel object!! Ready?

    print("Let us now convert this model into a Core ML object:")
    # Convert model to Core ML
    coreml_model = coremltools.converters.sklearn.convert(
        my_model,
        input_features=["crime", "rooms"],
        output_feature_names="price")
    # Save Core ML Model
    coreml_model.save("PriceBoston.mlmodel")
    print("Done!")

We are using the sklearn.convert method of coremltools.converters to convert the my_model model, specifying the necessary inputs (i.e. crime and rooms) and output (price). Finally we save the model in a file with the name PriceBoston.mlmodel.

Et voilà! In the next post we will start creating an iOS app to use the model we have just built.

You can look at the code (in development) in my github site here.

## CoreML - Boston Prices exploration

In the previous post of this series we described some of the basics of linear regression, one of the most well-known models in machine learning. We saw that we can relate the values of input parameters $x_i$ to the target variable $y$ to be predicted. In this post we are going to create a linear regression model to predict the price of houses in Boston (based on valuations from the 1970s). The dataset provides information such as the crime rate (CRIM), areas of non-retail business in the town (INDUS), the age of people who own the house (AGE) and the average number of rooms (RM), as well as the median value of homes in \$1000s (MEDV), among other attributes.

Let us start by exploring the data. We are going to use Scikit-learn, and fortunately the dataset comes with the module. The input variables are included in the data attribute and the prices are given by the target attribute. We are going to load the input variables in the dataframe boston_df and the prices in the array y:

from sklearn import datasets
import pandas as pd

# Load the dataset before building the dataframe
boston = datasets.load_boston()
boston_df = pd.DataFrame(boston.data)
boston_df.columns = boston.feature_names
y = boston.target

We are going to build our model using only a limited number of inputs. In this case let us pay attention to the average number of rooms and the crime rate:

X = boston_df[['CRIM', 'RM']]
X.columns = ['Crime', 'Rooms']
X.describe()

The description of these two attributes is as follows:

            Crime       Rooms
count  506.000000  506.000000
mean     3.593761    6.284634
std      8.596783    0.702617
min      0.006320    3.561000
25%      0.082045    5.885500
50%      0.256510    6.208500
75%      3.647423    6.623500
max     88.976200    8.780000

As we can see, the minimum number of rooms is 3.56 and the maximum is 8.78, whereas for the crime rate the minimum is 0.006 and the maximum value is 88.97; nonetheless the median is only 0.26. We will use some of these values to define the ranges that will be provided to our users to find price predictions.
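As a sketch of what this means for the app (the exact ranges are a design choice on my part, not dictated by the data), the picker values could span a band around the median crime rate and the observed range of rooms:

```python
import numpy as np

# Candidate picker ranges informed by the summary statistics above.
crime_values = np.round(np.linspace(0.1, 0.3, 21), 2)  # 0.10, 0.11, ..., 0.30
room_values = list(range(4, 10))                       # 4, 5, ..., 9

print(crime_values[0], crime_values[-1], len(crime_values))
print(room_values)
```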

Finally, let us visualise the data:

We shall bear these values in mind when building our regression model in subsequent posts.

You can look at the code (in development) in my github site here.

## CoreML - Linear Regression

Hello again, where were we? ... Oh yes, we have been discussing CoreML and have even set up an appropriate python 2 environment to work with CoreML. In this post we are going to cover some of the most basic aspects of the workhorse of machine learning: the dependable linear regression model.

We are indeed all familiar with a line of best fit; I am sure that many of us remember fitting some by hand (you know who you are), and who hasn't played with Excel's capabilities? In a nutshell, a linear regression is a model that relates a variable $y$ to one or more explanatory (or independent) variables $X$. The parameters that define the model are estimated from the available data, and there are a number of assumptions about the explanatory variables; you can find more information in my Data Science and Analytics with Python book. We can think of the goal of a linear regression model as drawing a line through the data, as exemplified in the plot below:

Let us take the case of 2 independent variables $x_1$ and $x_2$. The linear regression model to predict our target variable $y$ is given by:

$y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \epsilon$,

where $\alpha$ and $\beta_i$ are the parameters to be estimated to help us generate predictions. With the aid of techniques such as least squares we can estimate the parameters $\alpha, \beta_1$ and $\beta_2$ by minimising the sum of the squares of the residuals, i.e. the differences between the observed values and the fitted values provided by the model. Once we have determined the parameters, we are able to score new (unseen) data for $x_1$ and $x_2$ to predict the value of $y$.
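To make this concrete, here is a minimal sketch of estimating $\alpha$, $\beta_1$ and $\beta_2$ by least squares with NumPy on synthetic data (the true parameters 2, 3 and -1.5 are of my choosing):

```python
import numpy as np

# Synthetic data from y = 2 + 3*x1 - 1.5*x2 plus a little noise.
rng = np.random.RandomState(0)
x1 = rng.uniform(0, 1, 200)
x2 = rng.uniform(0, 1, 200)
y = 2.0 + 3.0 * x1 - 1.5 * x2 + rng.normal(0, 0.01, 200)

# Design matrix with a leading column of ones for the intercept alpha.
A = np.column_stack([np.ones_like(x1), x1, x2])
alpha, beta1, beta2 = np.linalg.lstsq(A, y, rcond=None)[0]

print(round(alpha, 2), round(beta1, 2), round(beta2, 2))
```

The recovered parameters land very close to the true values, which is exactly what minimising the squared residuals buys us.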

In the next post we will show how we can do this for the Boston House Prices dataset using a couple of variables such as number of bedrooms in the property and a crime index for the area. Remember that the aim will be to show how to build the model to be used with CoreML and not a perfect model for the prediction.

Keep in touch.

-j

## Core ML - Preparing the environment

Hello again! In preparation for training a model to be converted by Core ML and used in an application, I would like to make sure we have a suitable environment to work in. One of the first things that came to my attention looking at the coremltools module is the fact that it only supports Python 2! Yes, you read correctly: you will have to make sure you use Python 2.7 if you want to make this work. As you probably know, Python 2 will be retired in 2020, so I hope that Apple is considering this in their development cycles. In the meantime you can see the countdown to Python 2's retirement here, and thanks Python 2 for the many years of service...

Anyway, if you are a Python 2 user, then you are good to go. If on the other hand you have moved with the times you may need to make appropriate installations. I am using Anaconda (you may use your favourite distro) and I will be creating a conda environment (I'm calling it coreml) with Python 2.7 and some of the libraries I will be using:

> conda create --name coreml python=2.7 ipython jupyter scikit-learn

> source activate coreml

(coreml) > pip install coremltools

I am sure there may be other modules that will be needed, and I will make appropriate installations (and additions to this post) as that becomes clearer.

You can get a look at Apple's coremltools github repo here.

ADDITIONS: As I mentioned, there may have been other modules that needed installing in the new environment; here is a list:

• pandas
• matplotlib
• pillow

## Core ML - What is it?

In a previous post I mentioned that I would be sharing some notes about my journey with doing data science and machine learning with Apple technology. This is the first of those posts, and here I will go over what Core ML is...

Core ML is a computer framework. So what is a framework? Well, in computing terms it is a software abstraction that enables generic functionality to be modified as required by the user, transforming it into software for a specific purpose, be it a full system or even a humble project.

So Core ML is an Apple-provided framework to speed up the development of apps that use trained machine learning models. Notice that word in bold - trained - is part of the description of the framework. This means that the model has to be developed externally, with appropriate training data for the specific project in mind. For instance, if you are interested in building a classifier that distinguishes cats from cars, then you need to train the model with lots of cat and car images.

As it stands, Core ML supports a variety of machine learning models, from generalised linear models (GLMs for short) to neural nets. Furthermore, it helps with the task of adding the trained machine learning model to your application by automatically creating a custom programmatic interface that supplies an API to your model. All this within the comfort of Xcode!

There is an important point to remember: the model has to be developed externally from Core ML. In other words, you may want to use your favourite machine learning framework (that word again), computer language and environment to cover the different aspects of the data science workflow. You can read more about that in Chapter 3 of my "Data Science and Analytics with Python" book. So whether you use Scikit-learn, Keras or Caffe, the model you develop has to be trained (tested and evaluated) beforehand. Once you are ready, then Core ML will support you in bringing it to the masses via your app.

As mentioned in the Core ML documentation:

Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. Running strictly on the device ensures the privacy of user data and guarantees that your app remains functional and responsive when a network connection is unavailable.

OK, so in the next few posts we will be using Python and coreml tools to generate a so-called .mlmodel file that Xcode can use and deploy. Stay tuned!

## What Is Artificial Intelligence?

Original article by JF Puget here.

Here is a question I was asked to discuss at a conference last month: what is Artificial Intelligence (AI)? Instead of trying to answer it, which could take days, I decided to focus on how AI has been defined over the years. Nowadays, most people probably equate AI with deep learning. This has not always been the case, as we shall see.

Most people say that AI was first defined as a research field in a 1956 workshop at Dartmouth College. In reality it had been defined six years earlier, by Alan Turing in 1950. Let me cite Wikipedia here:

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

The test was introduced by Turing in his paper, "Computing Machinery and Intelligence", while working at the University of Manchester (Turing, 1950; p. 460). It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?" This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".

So, the first definition of AI was about thinking machines.  Turing decided to test thinking via a chat.

The definition of AI rapidly evolved to include the ability to perform complex reasoning and planning tasks. Early success in the 50s led prominent researchers to make imprudent predictions about how AI would become a reality in the 60s. The lack of realization of these predictions led to funding cuts known as the AI winter of the 70s.

In the early 80s, building on some success in medical diagnosis, AI came back with expert systems. These systems were trying to capture the expertise of humans in various domains, and were implemented as rule-based systems. Those were the days when AI focused on the ability to perform tasks at best human expertise level. Successes like IBM Deep Blue beating the chess world champion, Garry Kasparov, in 1997 were the acme of this line of AI research.

Let's contrast this with today's AI. The focus is on perception: can we have systems that recognize what is in a picture, what is in a video, what is said in a sound track? Rapid progress is underway for these tasks thanks to the use of deep learning. Is it still AI? Are we automating human thinking? The reality is that we are working on automating tasks that most humans can do without any thinking effort. Yet we see lots of bragging about AI being a reality when all we have is some ability to mimic human perception. I really find it ironic that our definition of intelligence is that of mere perception rather than thinking.

Granted, not all AI work today is about perception. Work on natural language processing (e.g. translation) is a bit closer to reasoning than the perception tasks described above. Successes like IBM Watson at Jeopardy! or Google's AlphaGo at Go are two examples of traditional AI aiming to replicate tasks performed by human experts. The good news (to me at least) is that progress on perception is so rapid that it will move from a research field to an engineering field in the coming years. We will then see researchers re-positioning onto other AI-related topics such as reasoning and planning. We'll be closer to Turing's initial view of AI.

## Now... presenting at ODSC Europe

Data science is definitely on everyone’s lips, and this time I had the opportunity to showcase some of my thoughts, practices and interests at the Open Data Science Conference in London.

The event was very well attended by data scientists, engineers and developers at all levels of seniority, as well as business stakeholders. I had the great opportunity to present the landscape that newcomers and seasoned practitioners must be familiar with to be able to make a successful transition into this exciting field.

It was also a great opportunity to showcase “Data Science and Analytics with Python” and to get to meet new people including some that know other members of my family too.

-j

## Data Science and Analytics with Python - New York Team

Earlier this week I received this picture of the team in New York. As you can see they have recently all received a copy of my "Data Science and Analytics with Python" book.

Thanks guys!

## Python overtakes R - Reblog

Did you use R, Python (along with their packages), both, or other tools for Analytics, Data Science, Machine Learning work in 2016 and 2017?

Python did not quite "swallow" R, but the results, based on 954 voters, show that in 2017 the Python ecosystem overtook R as the leading platform for Analytics, Data Science, and Machine Learning.

While in 2016 Python was in 2nd place ("Mainly Python" had 34% share vs 42% for "Mainly R"), in 2017 Python had 41% vs 36% for R.

The share of KDnuggets readers who used both R and Python in significant ways also increased from 8.5% to 12% in 2017, while the share who mainly used other tools dropped from 16% to 11%.

Fig. 1: Share of Python, R, Both, or Other platforms usage for Analytics, Data Science, Machine Learning, 2016 vs 2017

Next, we examine the transitions between the different platforms.

Fig. 2: Analytics, Data Science, Machine Learning Platforms
Transitions between R, Python, Both, and Other from 2016 to 2017

This chart looks complicated, but we see two key aspects, and Python wins on both:

• Loyalty: Python users are more loyal, with 91% of 2016 Python users staying with Python. Only 74% of R users stayed, and 60% of users of other platforms did.
• Switching: Only 5% of Python users moved to R, while twice as many R users (10%) moved to Python. Among those who used both in 2016, only 49% kept using both, 38% moved to Python, and 11% moved to R.

Next, we look at trends across multiple years.

In our 2015 poll on R vs Python we did not offer an option for "Both Python and R", so to compare trends across 4 years we replace the shares of Python and R in 2016 and 2017 by:
Python* = (Python share) + 50% of (Both Python and R share)
R* = (R share) + 50% of (Both Python and R share)
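As a quick sanity check, the split can be applied to the 2017 figures quoted earlier (41% mainly Python, 36% mainly R, 12% both). This is just a sketch of the arithmetic, not part of the original poll:

```python
# Split the "Both Python and R" share evenly between the two platforms,
# as described above, using the 2017 poll figures.
python_share = 41.0  # "Mainly Python", 2017
r_share = 36.0       # "Mainly R", 2017
both_share = 12.0    # "Both Python and R", 2017

python_star = python_share + 0.5 * both_share
r_star = r_share + 0.5 * both_share

print(f"Python*: {python_star}%")  # 47.0%
print(f"R*: {r_star}%")            # 42.0%
```

The 47% figure for Python* is the one quoted in the trend discussion below.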

We see that the share of R usage is slowly declining (from about 50% in 2015 to 36% in 2017), while the Python share is growing steadily, from 23% in 2014 to 47% in 2017. The share of other platforms is also steadily declining.

Fig. 3: Python vs R vs Other platforms for Analytics, Data Science, and Machine Learning, 2014-17

Finally, we look at trends and patterns by region. The regional participation was:

• Europe, 35%
• Asia, 12.5%
• Latin America, 6.2%
• Africa/Middle East, 3.6%
• Australia/NZ, 3.1%

To simplify the chart we split "Both" votes among R and Python, as above, and also combine 4 regions with smaller participation of Asia, AU/NZ, Latin America, and Africa/Middle East into one "Rest" region.

Fig. 4: Python* vs R* vs Rest by Region, 2016 vs 2017

We observe the same pattern across all regions:

• increase in Python share, by 8-10%
• decline in R share, by about 2-4%
• decline in other platforms, by 5-7%

The future looks bright for Python users, but we expect that R and other platforms will retain some share in the foreseeable future because of their large embedded base.