Data Science

A collection of Data Science and Data Visualisation related posts, pics and thoughts. Take a look and enjoy.

Now... presenting at ODSC Europe

Data science is definitely on everyone’s lips, and this time I had the opportunity to showcase some of my thoughts, practices and interests at the Open Data Science Conference in London.

The event was very well attended by data scientists, engineers and developers at all levels of seniority, as well as business stakeholders. I had the great opportunity to present the landscape that newcomers and seasoned practitioners must be familiar with to be able to make a successful transition into this exciting field.

It was also a great opportunity to showcase “Data Science and Analytics with Python” and to get to meet new people including some that know other members of my family too.


Read me...

Data Science and Analytics with Python - New York Team

Earlier this week I received this picture of the team in New York. As you can see they have recently all received a copy of my "Data Science and Analytics with Python" book.

Thanks guys!


Read me...

Python overtakes R - Reblog

Did you use R, Python (along with their packages), both, or other tools for Analytics, Data Science, Machine Learning work in 2016 and 2017?

Python did not quite "swallow" R, but the results, based on 954 voters, show that in 2017 the Python ecosystem overtook R as the leading platform for Analytics, Data Science, and Machine Learning.

While in 2016 Python was in 2nd place ("Mainly Python" had 34% share vs 42% for "Mainly R"), in 2017 Python had 41% vs 36% for R.

The share of KDnuggets readers who used both R and Python in significant ways also increased from 8.5% to 12% in 2017, while the share who mainly used other tools dropped from 16% to 11%.

Fig. 1: Share of Python, R, Both, or Other platforms usage for Analytics, Data Science, Machine Learning, 2016 vs 2017

Next, we examine the transitions between the different platforms.

Fig. 2: Analytics, Data Science, Machine Learning Platforms
Transitions between R, Python, Both, and Other from 2016 to 2017

This chart looks complicated, but we see two key aspects, and Python wins on both:

  • Loyalty: Python users are more loyal, with 91% of 2016 Python users staying with Python. Only 74% of R users stayed, and 60% of other platforms users did.
  • Switching: Only 5% of Python users moved to R, while twice as many R users (10%) moved to Python. Among those who used both in 2016, only 49% kept using both, 38% moved to Python, and 11% moved to R.

Next, we look at trends across multiple years.

In our 2015 Poll on R vs Python we did not offer an option for "Both Python and R", so to compare trends across 4 years, we replace the shares of Python and R in 2016 and 2017 by:
Python* = (Python share) + 50% of (Both Python and R)
R* = (R share) + 50% of (Both Python and R)
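The adjustment above can be checked with a few lines of Python (a quick sketch; the helper function name is just for illustration):

```python
def adjusted_share(main_share, both_share):
    """Fold half of the 'Both Python and R' share into a platform's own share."""
    return main_share + 0.5 * both_share

# 2017 poll figures quoted in the text: Python 41%, R 36%, Both 12%
python_star = adjusted_share(41, 12)  # 47.0
r_star = adjusted_share(36, 12)       # 42.0
print(python_star, r_star)
```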

We see that the share of R usage is slowly declining (from about 50% in 2015 to about 42% in 2017), while the Python share is steadily growing, from 23% in 2014 to 47% in 2017. The share of other platforms is also steadily declining.

Fig. 3: Python vs R vs Other platforms for Analytics, Data Science, and Machine Learning, 2014-17

Finally, we look at trends and patterns by region. The regional participation was:

  • US/Canada, 40%
  • Europe, 35%
  • Asia, 12.5%
  • Latin America, 6.2%
  • Africa/Middle East, 3.6%
  • Australia/NZ, 3.1%

To simplify the chart we split "Both" votes among R and Python, as above, and also combine 4 regions with smaller participation of Asia, AU/NZ, Latin America, and Africa/Middle East into one "Rest" region.

Fig. 4: Python* vs R* vs Rest by Region, 2016 vs 2017

We observe the same pattern across all regions:

  • increase in Python share, by 8-10%
  • decline in R share, by about 2-4%
  • decline in other platforms, by 5-7%

The future looks bright for Python users, but we expect that R and other platforms will retain some share in the foreseeable future because of their large embedded base.

Read me...

Languages for Data Science

Very often the question arises: which programming language is best for data science work? The answer depends on who you ask; there are many options out there, each with its advantages and disadvantages. Here are some thoughts from Peter Gleeson on the matter:

While there is no correct answer, there are several things to take into consideration. Your success as a data scientist will depend on many points, including:

Specificity

When it comes to advanced data science, you will only get so far reinventing the wheel each time. Learn to master the various packages and modules offered in your chosen language. The extent to which this is possible depends on what domain-specific packages are available to you in the first place!

Generality

A top data scientist will have good all-round programming skills as well as the ability to crunch numbers. Much of the day-to-day work in data science revolves around sourcing and processing raw data or ‘data cleaning’. For this, no amount of fancy machine learning packages are going to help.

Productivity

In the often fast-paced world of commercial data science, there is much to be said for getting the job done quickly. However, this is what enables technical debt to creep in — and only with sensible practices can this be minimized.

Performance

In some cases it is vital to optimize the performance of your code, especially when dealing with large volumes of mission-critical data. Compiled languages are typically much faster than interpreted ones; likewise statically typed languages are considerably more fail-proof than dynamically typed. The obvious trade-off is against productivity.

To some extent, these can be seen as a pair of axes (Generality-Specificity, Performance-Productivity). Each of the languages below fall somewhere on these spectra.

With these core principles in mind, let’s take a look at some of the more popular languages used in data science. What follows is a combination of research and the personal experience of myself, friends and colleagues — but it is by no means definitive! In approximate order of popularity, here goes:


R

What you need to know

Released in 1995 as a direct descendant of the older S programming language, R has since gone from strength to strength. Written in C, Fortran and itself, the project is currently supported by the R Foundation for Statistical Computing.



Pros
  • Excellent range of high-quality, domain specific and open source packages. R has a package for almost every quantitative and statistical application imaginable. This includes neural networks, non-linear regression, phylogenetics, advanced plotting and many, many others.
  • The base installation comes with very comprehensive, in-built statistical functions and methods. R also handles matrix algebra particularly well.
  • Data visualization is a key strength with the use of libraries such as ggplot2.

Cons
  • Performance. There’s no two ways about it, R is not a quick language.
  • Domain specificity. R is fantastic for statistics and data science purposes. But less so for general purpose programming.
  • Quirks. R has a few unusual features that might catch out programmers experienced with other languages. For instance: indexing from 1, using multiple assignment operators, unconventional data structures.

Verdict — “brilliant at what it’s designed for”

R is a powerful language that excels at a huge variety of statistical and data visualization applications, and being open source allows for a very active community of contributors. Its recent growth in popularity is a testament to how effective it is at what it does.


Python

What you need to know

Guido van Rossum introduced Python back in 1991. It has since become an extremely popular general purpose language, and is widely used within the data science community. The major versions are currently 3.6 and 2.7.



Pros
  • Python is a very popular, mainstream general purpose programming language. It has an extensive range of purpose-built modules and community support. Many online services provide a Python API.
  • Python is an easy language to learn. The low barrier to entry makes it an ideal first language for those new to programming.
  • Packages such as pandas, scikit-learn and Tensorflow make Python a solid option for advanced machine learning applications.

Cons
  • Type safety: Python is a dynamically typed language, which means you must show due care. Type errors (such as passing a String as an argument to a method which expects an Integer) are to be expected from time to time.
  • For specific statistical and data analysis purposes, R’s vast range of packages gives it a slight edge over Python. For general purpose languages, there are faster and safer alternatives to Python.
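The kind of type error mentioned above is easy to reproduce (a toy example, not from the original article):

```python
def repeat(text, times):
    """Naively expects `times` to be an int."""
    return text * times

print(repeat("ab", 3))   # "ababab" — fine
try:
    repeat("ab", "3")    # a str where an int was expected
except TypeError as exc:
    print(f"caught at runtime: {exc}")
```

In a statically typed language this mistake would be caught at compile time; in Python it only surfaces when the offending line actually runs.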

Verdict — “excellent all-rounder”

Python is a very good choice of language for data science, and not just at entry level. Much of the data science process revolves around ETL (extract-transform-load), and Python’s generality is ideally suited to this. Libraries such as Google’s TensorFlow make Python a very exciting language to work in for machine learning.
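To give a flavour of that workflow, here is a minimal pandas + scikit-learn sketch on synthetic data (the feature names, data and model choice are illustrative only):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic example: predict a binary target from two numeric features
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "x1": rng.normal(size=200),
    "x2": rng.normal(size=200),
})
df["target"] = (df["x1"] + df["x2"] > 0).astype(int)

# Hold out a test set, fit a simple model, and score it
X_train, X_test, y_train, y_test = train_test_split(
    df[["x1", "x2"]], df["target"], random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```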


SQL

What you need to know

SQL (‘Structured Query Language’) defines, manages and queries relational databases. The language first appeared in 1974 and has since undergone many implementations, but the core principles remain the same.


Pricing

Varies — some implementations are free, others proprietary

Pros
  • Very efficient at querying, updating and manipulating relational databases.
  • Declarative syntax makes SQL an often very readable language. There’s no ambiguity about what SELECT name FROM users WHERE age > 18 is supposed to do!
  • SQL is widely used across a range of applications, making it a very useful language to be familiar with. Modules such as SQLAlchemy make integrating SQL with other languages straightforward.
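For instance, the query above can be run against an in-memory SQLite database straight from Python's standard library (a toy sketch with made-up rows):

```python
import sqlite3

# In-memory database with a toy `users` table (illustrative data)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Ana", 34), ("Ben", 17), ("Carla", 22)])

# The declarative query from the text: adults only
adults = conn.execute(
    "SELECT name FROM users WHERE age > 18").fetchall()
print(adults)  # [('Ana',), ('Carla',)]
conn.close()
```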

Cons
  • SQL’s analytical capabilities are rather limited: beyond aggregating, summing, counting and averaging data, your options are few.
  • For programmers coming from an imperative background, SQL’s declarative syntax can present a learning curve.
  • There are many different implementations of SQL, such as PostgreSQL, SQLite and MariaDB. They are all different enough to make interoperability something of a headache.

Verdict — “timeless and efficient”

SQL is more useful as a data processing language than as an advanced analytical tool. Yet so much of the data science process hinges upon ETL, and SQL’s longevity and efficiency are proof that it is a very useful language for the modern data scientist to know.


Java

What you need to know

Java is an extremely popular, general purpose language which runs on the Java Virtual Machine (JVM), an abstract computing system that enables seamless portability between platforms. Java is currently supported by Oracle Corporation.


Pricing

Version 8 — Free! Legacy versions, proprietary.

Pros
  • Ubiquity. Many modern systems and applications are built upon a Java back-end. The ability to integrate data science methods directly into the existing codebase is a powerful one to have.
  • Strongly typed. Java is no-nonsense when it comes to ensuring type safety. For mission-critical big data applications, this is invaluable.
  • Java is a high-performance, general purpose, compiled language. This makes it suitable for writing efficient ETL production code and computationally intensive machine learning algorithms.

Cons
  • For ad-hoc analyses and more dedicated statistical applications, Java’s verbosity makes it an unlikely first choice. Dynamically typed scripting languages such as R and Python lend themselves to much greater productivity.
  • Compared to domain-specific languages like R, there aren’t a great number of libraries available for advanced statistical methods in Java.

Verdict — “a serious contender for data science”

There is a lot to be said for learning Java as a first-choice data science language. Many companies will appreciate the ability to seamlessly integrate data science production code directly into their existing codebase, and you will find Java’s performance and type safety are real advantages. However, you’ll be without the range of stats-specific packages available to other languages. That said, it is definitely one to consider — especially if you already know R and/or Python.


Scala

What you need to know

Developed by Martin Odersky and released in 2004, Scala is a language which runs on the JVM. It is a multi-paradigm language, enabling both object-oriented and functional approaches. Cluster computing framework Apache Spark is written in Scala.



Pros
  • Scala + Spark = High performance cluster computing. Scala is an ideal choice of language for those working with high-volume data sets.
  • Multi-paradigmatic: Scala programmers can have the best of both worlds, with both object-oriented and functional programming paradigms available to them.
  • Scala is compiled to Java bytecode and runs on a JVM. This allows inter-operability with the Java language itself, making Scala a very powerful general purpose language, while also being well-suited for data science.

Cons
  • Scala is not a straightforward language to get up and running with if you’re just starting out. Your best bet is to download sbt and set up an IDE such as Eclipse or IntelliJ with a specific Scala plug-in.
  • The syntax and type system are often described as complex. This makes for a steep learning curve for those coming from dynamic languages such as Python.

Verdict — “perfect, for suitably big data”

When it comes to using cluster computing to work with Big Data, then Scala + Spark are fantastic solutions. If you have experience with Java and other statically typed languages, you’ll appreciate these features of Scala too. Yet if your application doesn’t deal with the volumes of data that justify the added complexity of Scala, you will likely find your productivity being much higher using other languages such as R or Python.


Julia

What you need to know

Released just over five years ago, Julia has made an impression in the world of numerical computing. Its profile was raised thanks to early adoption by several major organizations, including many in the finance industry.



Pros
  • Julia is a JIT (‘just-in-time’) compiled language, which lets it offer good performance. It also offers the simplicity, dynamic-typing and scripting capabilities of an interpreted language like Python.
  • Julia was purpose-designed for numerical analysis. It is capable of general purpose programming as well.
  • Readability. Many users of the language cite this as a key advantage.

Cons
  • Maturity. As a new language, some Julia users have experienced instability when using packages. But the core language itself is reportedly stable enough for production use.
  • Limited packages are another consequence of the language’s youth and small development community. Unlike the long-established R and Python, Julia doesn’t yet have the same choice of packages.

Verdict — “one for the future”

The main issue with Julia is one it cannot be blamed for: as a recently developed language, it isn’t as mature or production-ready as its main alternatives, Python and R. But if you are willing to be patient, there’s every reason to pay close attention as the language evolves in the coming years.


MATLAB

What you need to know

MATLAB is an established numerical computing language used throughout academia and industry. It is developed and licensed by MathWorks, a company established in 1984 to commercialize the software.


Pricing

Proprietary — pricing varies depending on your use case

Pros
  • Designed for numerical computing. MATLAB is well-suited for quantitative applications with sophisticated mathematical requirements such as signal processing, Fourier transforms, matrix algebra and image processing.
  • Data Visualization. MATLAB has some great inbuilt plotting capabilities.
  • MATLAB is often taught as part of many undergraduate courses in quantitative subjects such as Physics, Engineering and Applied Mathematics. As a consequence, it is widely used within these fields.

Cons
  • Proprietary licence. Depending on your use-case (academic, personal or enterprise) you may have to fork out for a pricey licence. There are free alternatives available such as Octave. This is something you should give real consideration to.
  • MATLAB isn’t an obvious choice for general-purpose programming.

Verdict — “best for mathematically intensive applications”

MATLAB’s widespread use in a range of quantitative and numerical fields throughout industry and academia makes it a serious option for data science. The clear use-case would be when your application or day-to-day role requires intensive, advanced mathematical functionality; indeed, MATLAB was specifically designed for this.

Other Languages

There are other mainstream languages that may or may not be of interest to data scientists. This section provides a quick overview… with plenty of room for debate of course!


C++

C++ is not a common choice for data science, although it has lightning-fast performance and widespread mainstream popularity. The simple reason may be a question of productivity versus performance.

As one Quora user puts it:

“If you’re writing code to do some ad-hoc analysis that will probably only be run one time, would you rather spend 30 minutes writing a program that will run in 10 seconds, or 10 minutes writing a program that will run in 1 minute?”

The dude’s got a point. Yet for serious production-level performance, C++ would be an excellent choice for implementing machine learning algorithms optimized at a low-level.

Verdict — “not for day-to-day work, but if performance is critical…”

Read me...

Probably more likely than probable - Reblog

This is a reblog of the post Probably more likely than probable that appeared on the Revolutions blog. You can see the original here.

What kind of probability are people talking about when they say something is "highly likely" or has "almost no chance"? The chart below, created by Reddit user zonination, visualizes the responses of 46 other Reddit users to "What probability would you assign to the phrase: <phrase>" for various statements of probability. Each set of responses has been converted to a kernel density estimate and presented as a joyplot using R.
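The joyplot itself was built in R, but the underlying step — turning a set of probability responses into a kernel density estimate — can be sketched in Python with NumPy (the responses below are made up for illustration):

```python
import numpy as np

def gaussian_kde(samples, xs, bandwidth=5.0):
    """Evaluate a simple Gaussian kernel density estimate at points xs."""
    samples = np.asarray(samples, dtype=float)[:, None]
    kernels = np.exp(-0.5 * ((xs - samples) / bandwidth) ** 2)
    return kernels.sum(axis=0) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

# Hypothetical responses (in %) to the phrase "highly likely"
responses = [80, 85, 90, 90, 95, 75, 88]
xs = np.linspace(0, 100, 101)
density = gaussian_kde(responses, xs)
print(f"density peaks near {xs[density.argmax()]:.0f}%")
```

Stacking one such curve per phrase, offset vertically, gives the joyplot layout described above.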


Somewhat surprisingly, the results from the Redditors hew quite closely to a similar study of 23 NATO intelligence officers in 2007. In that study, the officers — who were accustomed to reading intelligence reports with assertions of likelihood — were given a similar task with the same descriptions of probability. The results, here presented as a dotplot, are quite similar.


For details on the analysis of the Redditors, including the data and R code behind the joyplot chart, check out the Github repository linked below.

Github (zonination): Perceptions of Probability and Numbers

Read me...

Another "Data Science and Analytics with Python" Delivered

Another "Data Science and Analytics with Python" Delivered. Thanks for sharing the picture Dave Groves.

Read me...

Data Science and Analytics with Python already being suggested!

"Data Science and Analytics with Python" was published yesterday and now it is already appearing as a suggested book for related titles.

You can find it with the link above or in Amazon here.



Read me...

"Data Science and Analytics with Python" is published

Very pleased to see that finally the publication of my "Data Science and Analytics with Python" book has arrived.

Read me...

Final version of "Data Science and Analytics with Python" approved

It has been a long road, one filled with unicorns and Jackalopes, decision trees and random forests, variance and bias, cats and dogs, and targets and features.

Well over a year ago, the idea of writing another book seemed like a farfetched proposition. Writing the book came about from the work that I have been doing in the area as well as from discussions with my colleagues and students, including also practitioners and beneficiaries of data science and analytics.

It is my sincere hope that the book is useful to those coming afresh to this new field as well as to those more seasoned data scientists.

This afternoon I had the pleasure of approving the final version of the book that will be sent to the printers in the next few days.

Once the book is available you can get a copy directly with CRC Press or from Amazon.



Read me...

Data Science & Augmented Intelligence - Reblog from "Data Science: a new discipline to change the world" by Alan Wilson

This is a reblog of the post by Alan Wilson that appeared in the EPSRC blog. You can see the original here.


Data science - the new kid on the block

I have re-badged myself several times in my research career: mathematician, theoretical physicist, economist (of sorts), geographer, city planner, complexity scientist, and now data scientist. This is partly personal idiosyncrasy but also a reflection of how new interdisciplinary research challenges emerge. I now have the privilege of being the Chief Executive of The Alan Turing Institute - the national centre for data science. 'Data science' is the new kid on the block. How come?

First, there is an enormous amount of new 'big' data; second, this has had a powerful impact on all the sciences; and third, on society, the economy and our way of life. Data science represents these combinations. The data comes from wide-spread digitisation combined with the 'open data' initiatives of government and extensive deployment of sensors and devices such as mobile phones. This generates huge research opportunities.

In broad terms, data science has two main branches. First, what can we do with the data? Applications of statistics and machine learning fall under this branch. Second, how can we transform existing science with this data and these methods? Much of the second is rooted in mathematics. To make this work in practice, there is a time-consuming first step: making the data useable by combining different sources in different formats. This is known as 'data wrangling', which coincidentally is the subject of a new Turing research project to speed up this time-consuming process. The whole field is driven by the power of the computer, and computer science. Understanding the effects of data on society, and the ethical questions it provokes, is led by the social sciences.

All of this combines in the idea of artificial intelligence, or AI. While the 'machine' has not yet passed the 'Turing test' and cannot compete with humans in thought, in many applications AI and data science now support human decision making. The current buzz phrase for this is 'augmented intelligence'.

Cross-disciplinary potential

I can illustrate the research potential of data science through two examples, the first from my own field of urban research; the second from medicine - with recent AI research in this field learned, no doubt imperfectly, from my Turing colleague Mihaela van der Schaar.

There is a long history of developing mathematical and computer models of cities. Data arrives very slowly for model calibration - the census, for example, is critical. A combination of open government data and real-time flows from mobile phones and social media networks has changed this situation: real-time calibration is now possible. This potentially transforms both the science and its application in city planning. Machine learning complements, and potentially integrates with, the models. Data science in this case adds to an existing deep knowledge base.

Medical diagnosis is also underpinned by existing knowledge - physiology, cell and molecular biology for example. It is a skilled business, interpreting symptoms and tests. This can be enhanced through data science techniques - beginning with advances in imaging and visualisation and then the application of machine learning to the variety of evidence available. The clinician can add his or her own judgement. Treatment plans follow. At this point, something really new kicks in. 'Live' data on patients, including their responses to treatment, becomes available. This data can be combined with personal data to derive clusters of 'like' patients, enabling the exploration of the effectiveness of different treatment plans for different types of patients. This combination of data science techniques and human decision making is an excellent example of augmented intelligence. This opens the way to personalised intelligent medicine, which is set to have a transformative effect on healthcare (for those interested in finding out more, reserve a place for Mihaela van der Schaar's Turing Lecture on 4 May).
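As a rough sketch of the "clusters of 'like' patients" idea, here is a scikit-learn example on entirely synthetic patient features (KMeans is just one possible clustering choice, not necessarily what the Turing researchers use):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic patient features: [age, treatment-response score]
rng = np.random.default_rng(42)
young_responders = rng.normal(loc=[35, 0.8], scale=[5, 0.05], size=(50, 2))
older_nonresponders = rng.normal(loc=[70, 0.3], scale=[5, 0.05], size=(50, 2))
patients = np.vstack([young_responders, older_nonresponders])

# Group patients into two clusters of 'like' patients
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(patients)
labels = kmeans.labels_
print(f"cluster sizes: {np.bincount(labels)}")
```

Treatment effectiveness could then be compared per cluster rather than over the whole population, which is the "augmented intelligence" step the text describes.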

An exciting new agenda

These kinds of developments of data science, and the associated applications, are possible in almost all sectors of industry. It is the role of the Alan Turing Institute to explore both the fundamental science underpinnings, and the potential applications, of data science across this wide landscape.

We currently work in fields as diverse as digital engineering, defence and security, computer technology and finance as well as cities and health. This range will expand as this very new Institute grows. We will work with and through universities and with commercial, public and third sector partners, to generate and develop the fruits of data science. This is a challenging agenda but a hugely exciting one.

Read me...

Listening to O'Reilly Data Show - with Aurélien Géron

Listening to O'Reilly Data Show - O'Reilly Media Podcast (Becoming a machine learning engineer):

The O’Reilly Data Show Podcast: Aurélien Géron on enabling companies to use machine learning in real-world products.

In this episode of the Data Show, I spoke with Aurélien Géron, a serial entrepreneur, data scientist, and author of a popular new book entitled Hands-on Machine Learning with Scikit-Learn and TensorFlow. Géron’s book is aimed at software engineers who want to learn machine learning and start deploying machine learning models in real-world products.

As more companies adopt big data and data science technologies, there is an emerging cohort of individuals who have strong software engineering skills and are experienced using machine learning and statistical techniques. The need to build data products has given rise to what many are calling “machine learning engineers”: individuals who can work on both data science prototypes and production systems.

Géron is finding strong demand for his services as a consulting machine learning engineer, and he hopes his new book will be an important resource for those who want to enter the field.

Here are some highlights from our conversation:

From product manager to machine learning engineer

I decided to join Google. They offered me a job as the lead product manager of YouTube's video classification team. The goal is to create a system that can automatically find out what each video is about. Google has a huge knowledge graph with hundreds of millions of topics in it, and the goal is to actually connect each video with all the topics in the knowledge graph covered in the video.

... I was a product manager, and I had always been a software engineer. I felt a little bit far from the technical aspects; I wanted to code again. That was the first thing. The second thing is, TensorFlow came out and there was a lot of communication internally at Google. I began using TensorFlow, and loved it. I knew TensorFlow would become popular, and I felt it would make for a good book.

Writing a machine learning book for engineers

I had gone through all the classes I could; there are internal classes at Google for learning machine learning, and they had great teachers there. I also learned as much as I could from books, from Andrew Ng's Coursera class, and everything you can think of to learn machine learning. I was a bit frustrated by the books. The books are really good, but a lot of them are from researchers and they don't feel hands-on. I'm a software engineer; I wanted to code. That's when I decided that I wanted to write a book about TensorFlow that was really hands-on, with examples of code and things that engineers would pick up and start using right away. The other thing is that while there were a few books targeted at engineers, they really stayed as far away from the underlying math as possible. In addition, many of the existing books relied on toy functions, toy examples of code, and that was also a bit frustrating because I wanted to have production-ready code. That's how the idea grew: write a book about TensorFlow for engineers, with production-ready examples.

Business metrics are distinct from machine learning metrics

You can spend months tuning a great classifier that will detect with 98% precision a particular set of topics, but then you launch it and it really doesn't affect your business metrics whatsoever.

The first step is to really understand what the business metrics, or objectives, are. How are you going to measure them? Then, go and see if you have a chance at improving things. An interesting technique is to try to manually achieve the task. Have a human try to achieve the task and see if that has an impact. It's not always possible, but if you can do that, it might be worth spending months building an architecture to do it automatically. If a human cannot improve things, it might be challenging for a machine to do better. It might still be possible, but it might be tougher.

Make sure you know what the business objective is and never lose track of it. I've seen people start improving models, but they don't really have metrics to see whether or not things have improved. It sounds stupid, but one of the very first things you need to do is to make sure you have clear metrics that everybody agrees on. It's very tempting to say, ‘I feel this architecture is going to work better’ and try to then work on it, but it hasn't improved anything because you're working without metrics.

Related resources:

Read me...

Machine Learning Explained - Video

"What is machine learning?" is a question a lot of us often encounter. From email filtering to recommendation engines, machine learning is used in many of our daily activities.

Here is a video from the Oxford Sparks outreach program of the University of Oxford with a two-minute explanation. Enjoy


Read me...
