## Life lessons from differential equations

Ten life lessons from differential equations:

1. Some problems simply have no solution.
2. Some problems have no simple solution.
3. Some problems have many solutions.
4. Determining that a solution exists may be half the work of finding it.
5. Solutions that work well locally may blow up when extended too far.
6. Boundary conditions are the hard part.
7. Something that starts out as a good solution may become a very bad solution.
8. You can fool yourself by constructing a solution where one doesn’t exist.
9. Expand your possibilities to find a solution, then reduce them to see how good the solution is.
10. You can sometimes do what sounds impossible by reframing your problem.

## Working with data and approaching data-based competitions

We are getting close to the end of my 11-week Data Science class at General Assembly. As on previous occasions, I had a whale of a time talking to people who are genuinely interested in data, analytics, science and models. Some of the projects this time have been Kaggle competitions. This brings some advantages, as the data is readily available, but other challenges do arise. It is effectively a game of whack-a-mole: sometimes the data is masked or hashed, and there may be too much of it, or only limited information.

In any case, the fact that you can submit your predictions and be ranked among other competitors does raise the question as to how (and, more importantly, why) you gain an extra basis point in your score. In some cases this may indeed be important, but my view here is that since “all models are wrong” the truly important thing is to ask how comfortable you are with the score obtained and whether your business or application is resilient to that kind of error (think airplane safety versus ice-cream flavour choices). This discussion reminded me of a recent episode of Talking Machines, a podcast about machine learning that I recommended to you readers some time ago.

In episode 13 of Talking Machines, Katherine Gorman and Ryan Adams interviewed Claudia Perlich, the Chief Data Scientist at DStillery. Claudia has won a number of competitions. She was trying to avoid talking about the subject, and I am glad the interviewers steered the conversation that way. Her secret to winning so many competitions, she says, is that she “finds something wrong with the data”. She explains that she likes getting intimately familiar with the data, and often she comes across something that should not be there and can thus be exploited.

She talked about a particular breast cancer modelling competition where they built the most predictive model “not because we understand medicine”, she explained, “but because we realised that the patient identifier, which was just a random number, was by far the most predictive feature”. The story behind that dataset is that it was compiled from different data sources: some were screening centres, others treatment centres. As such, she explains, “the base rate, i.e. the natural percentage of the population that was affected, was very different and you could back this out from the patient identifier”. If the organisers had been explicit about this, then the modelling would have been treated differently. I particularly like the fact that she highlights that these exploits are of importance in a competition environment but not in “real applications”.

When asked about her approach to finding these exploits, she explains that she looks at the data “in the screen, like the matrix, you have these things flashing down and what works very well for me is a certain expectation or intuition of what you should be seeing”. As an example, Claudia mentions that things that should not be sorted but appear sorted in the dataset may be an indication of manipulation. Another example: features that should be numeric but where certain values appear over and over again for no apparent reason, which typically means that someone replaced missing values with the average or the median. A practical tip she offers: if a nearest-neighbour model performs better than other algorithms, that is an indication of potential duplicates in the dataset!

As I was explaining to one of the guys in the course, a lot of the time it is not just about having tools and models at your disposal; experience with their use and outcomes is very important too. I was glad to hear Claudia echo those thoughts! “There is no grand theory behind it, no recommended toolset”, she says. After all, she has been quoted as saying that:

There is no clean or dirty data, just data you don’t understand

Like me, Claudia dislikes it when someone else cleans data on her behalf, as that can create more issues: in many cases assumptions about the data are made prior to its use. That is not to say that you do not need to manipulate your data, but at least then you know what transformations you have applied to it and the assumptions you have made.
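To make that concrete, here is a small, hypothetical sketch (plain Python; my own illustration, not from the podcast) of the kind of transformation I mean: missing values are imputed, but the assumption is recorded alongside the data rather than silently baked in:

```python
import statistics

def impute_with_log(values, log):
    """Replace missing values (None) with the median of the observed
    values, and record the assumption so downstream users know the
    data was touched."""
    present = [v for v in values if v is not None]
    median = statistics.median(present)
    n_missing = sum(1 for v in values if v is None)
    log.append(f"imputed {n_missing} missing value(s) with median={median}")
    return [median if v is None else v for v in values]

transformations = []   # a running record of every assumption made
ages = [23, None, 31, 45, None, 29]
clean = impute_with_log(ages, transformations)
print(clean)            # [23, 30.0, 31, 45, 30.0, 29]
print(transformations)  # ['imputed 2 missing value(s) with median=30.0']
```

If someone hands you only `clean`, the filled-in values look like real measurements; the log is what keeps the assumption visible.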

I highly recommend that you listen to the podcast, it is a very good and informative episode. You can do so here.

## n sweets in a bag, some are orange…

The other day in the news there was a note about a particular question in one of the national curriculum exams… I thought it was a bit of an odd thing for a maths question to feature in the news, and so I thought of having a look at the question. Here it is:

There are $n$ sweets in a bag.

6 of the sweets are orange.

The rest of the sweets are yellow.

Hannah takes at random a sweet from the bag. She eats the sweet.

Hannah then takes at random another sweet from the bag. She eats the sweet.

The probability that Hannah eats two orange sweets is $\frac{1}{3}.$

a) Show that $n^2-n-90=0$

It sounds like an odd question, but after giving it a bit of thought it is actually quite straightforward, and I am glad they ask something that makes you think rather than something that is purely a mechanical calculation.

So, let’s take a look: Hannah is taking sweets from the bag at random and without replacement (she eats the sweets after all). So we are told that there are 6 orange sweets, so at the beginning of the sweet-eating binge, the probability of picking an orange sweet is:

$\displaystyle P(\text{1 orange sweet})=\frac{6}{n}$.

Hannah eats the sweet, remember… so in the second go at the sweets, the probability of an orange sweet is now:

$\displaystyle P(\text{2nd orange sweet})=\frac{5}{n-1}$.

Now, they tell us that the probability of eating two orange sweets is $\frac{1}{3}$, so we have that:

$\displaystyle \left( \frac{6}{n} \right)\left( \frac{5}{n-1} \right)=\frac{1}{3}$,

$\displaystyle \frac{30}{n^2-n}=\frac{1}{3}$,

$\displaystyle n^2-n=90$,

which is the expression we were looking for. Furthermore, you can then solve this quadratic equation to find that the total number of sweets in the bag is 10.
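If you want to check that last step, the quadratic formula does the job in a few lines of Python (a quick sketch, with the coefficients read straight from $n^2-n-90=0$):

```python
import math

# Solve n^2 - n - 90 = 0 with the quadratic formula (a=1, b=-1, c=-90).
a, b, c = 1, -1, -90
disc = math.sqrt(b**2 - 4 * a * c)   # sqrt(1 + 360) = 19.0
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)  # (10.0, -9.0); only the positive root makes sense for a bag of sweets

# Sanity check: with 10 sweets, P(two orange) = (6/10) * (5/9) = 1/3
n = 10
assert math.isclose((6 / n) * (5 / (n - 1)), 1 / 3)
```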

The only thing we don’t know is if the sweets are just orange in colour, or also in flavour! We will have to ask Hannah!

## Shelf Life – The Tiniest Fossils

Really thrilled to continue seeing the American Museum of Natural History series Shelf Life. I blogged about this series earlier on in the year and they have kept to their word with interesting and unique instalments.

In Episode 6 we get to hear about micropaleontology, the study of fossil specimens so tiny that you cannot see them with the naked eye. The scientists and researchers tell us about foraminifera, unicellular organisms belonging to the kingdom Protista, which go back about 65 million years. In spite of being unicellular, they make shells! And this is indeed what makes it possible for them to become fossilised.

Interestingly enough, these fossils can be used to tell us something about ancient climate. As Roberto Moncada pointed out to me:

According to our expert in the piece, basically every representational graph you’ve ever seen of climate/temperatures from the Earth’s past is derived from analyzing these tiny little creatures.

The Tiniest Fossils are indeed among the most important for climate research!

## Markup for Fast Data Science Publication – Reblog

I am an avid user of Markdown via Mou and R Markdown (with RStudio). The facility that the iPython Notebook offers in combining code and text, rendered in an interactive webpage, makes it my choice for a number of things, including the 11-week Data Science course I teach at General Assembly.

As for LaTeX, well, I could not have survived my PhD without it and I still use it heavily. I have even created some videos about how to use LaTeX; you can take a look at them.

My book “Essential Matlab and Octave” was written and formatted in its entirety using LaTeX. My new book “Data Science and Analytics with Python” is having the same treatment.

I was very pleased to see the following blog post by Benjamin Bengfort. This is a reblog of that post and the original can be found here.

Markup for Fast Data Science Publication
Benjamin Bengfort

A central lesson of science is that to understand complex issues (or even simple ones), we must try to free our minds of dogma and to guarantee the freedom to publish, to contradict, and to experiment. — Carl Sagan in Billions & Billions: Thoughts on Life and Death at the Brink of the Millennium

As data scientists, it’s easy to get bogged down in the details. We’re busy implementing Python and R code to extract valuable insights from data, train effective machine learning models, or put a distributed computation system together. Many of these tasks, especially those relating to data ingestion or wrangling, are time-consuming but are the bread and butter of the data scientist’s daily grind. What we often forget, however, is that we must not only be data engineers, but also contributors to the data science corpus of knowledge.

If a data product derives its value from data and generates more data in return, then a data scientist derives their value from previously published works and should generate more publications in return. Indeed, one of the reasons that Machine Learning has grown ubiquitous (see the many Python-tagged questions related to ML on Stack Overflow) is thanks to meticulous blog posts and tools from scientific research (e.g. Scikit-Learn) that enable the rapid implementation of a variety of algorithms. Google in particular has driven the growth of data products by publishing systems papers about their methodologies, enabling the creation of open source tools like Hadoop and Word2Vec.

By building on a firm base for both software and for modeling, we are able to achieve greater results, faster. Exploration, discussion, criticism, and experimentation all enable us to have new ideas, write better code, and implement better systems by tapping into the collective genius of a data community. Publishing is vitally important to keeping this data science gravy train on the tracks for the foreseeable future.

In academia, the phrase “publish or perish” describes the pressure to establish legitimacy through publications. Clearly, we don’t want to take our role as authors that far, but the question remains, “How can we effectively build publishing into our workflow?” The answer is through markup languages – simple, streamlined markup that we can add to plain text documents that build into a publishing layout or format. For example, the following markup languages/platforms build into the accompanying publishable formats:

• Markdown → HTML
• iPython Notebook (JSON + Markdown) → Interactive Code
• reStructuredText + Sphinx → Python Documentation, ReadTheDocs.org
• AsciiDoc → ePub, Mobi, DocBook, PDF
• LaTeX → PDF

The great thing about markup languages is that they can be managed inline with your code workflow in the same software versioning repository. Github goes even further as to automatically render Markdown files! In this post, we’ll get you started with several markup and publication styles so that you can find what best fits into your workflow and deployment methodology.

Markdown

Markdown is the most ubiquitous of the markup languages we’ll describe in this post, and its simplicity means that it is often chosen for a variety of domains and applications, not just publishing. Markdown, originally created by John Gruber, is a text-to-HTML processor, where lightweight syntactic elements are used instead of the more heavyweight HTML tags. Markdown is intended for folks writing for the web, not designing for the web, and in some CMS systems, it is simply the way that you write, no fancy text editor required.

Markdown has seen special growth thanks to Github, which has an extended version of Markdown, usually referred to as “Github-Flavored Markdown.” This style of Markdown extends the basics of the original Markdown to include tables, syntax highlighting, and other inline formatting elements. If you create a Markdown file in Github, it is automatically rendered when viewing files on the web, and if you include a README.md in a directory, that file is rendered below the directory contents when browsing code. Github Issues are also expected to be in Markdown, further extended with tools like checkbox lists.

Markdown is used for so many applications it is difficult to name them all. Below are a select few that might prove useful to your publishing tasks.

• Jekyll allows you to create static websites that are built from posts and pages written in Markdown.
• Github Pages allows you to quickly publish Jekyll-generated static sites from a Github repository for free.
• Silvrback is a lightweight blogging platform that allows you to write in Markdown (this blog is hosted on Silvrback).
• Day One is a simple journaling app that allows you to write journal entries in Markdown.
• iPython Notebook expects Markdown to describe blocks of code.
• Stack Overflow expects questions, answers, and comments to be written in Markdown.
• MkDocs is a software documentation tool written in Markdown that can be hosted on ReadTheDocs.org.
• GitBook is a toolchain for publishing books written in Markdown to the web or as an eBook.

There are also a wide variety of editors, browser plugins, viewers, and tools available for Markdown. Both Sublime Text and Atom support Markdown and automatic preview, as well as most text editors you’ll use for coding. Mou is a desktop Markdown editor for Mac OSX and iA Writer is a distraction-free writing tool for Markdown for iOS. (Please comment your favorite tools for Windows and Android). For Chrome, extensions like Markdown Here make it easy to compose emails in Gmail via Markdown or Markdown Preview to view Markdown documents directly in the browser.

Clearly, Markdown enjoys a broad ecosystem and diverse usage. If you’re still writing HTML for anything other than templates, you’re definitely doing it wrong at this point! It’s also worth including Markdown rendering for your own projects if you have user submitted text (also great for text-processing).

Rendering Markdown can be accomplished with the Python Markdown library, usually combined with the Bleach library for sanitizing bad HTML and linkifying raw text. A simple demo of this is as follows:

First install markdown and bleach using pip:

$ pip install markdown bleach

Then create a markdown parsing function as follows:

```python
import bleach
from markdown import markdown

def htmlize(text):
    """
    This helper method renders Markdown then uses Bleach to sanitize it
    as well as converting all links in text to actual anchor tags.
    """
    text = bleach.clean(text, strip=True)  # Clean the text by stripping bad HTML tags
    text = markdown(text)                  # Convert the markdown to HTML
    text = bleach.linkify(text)            # Add links from the text and add nofollow to existing links
    return text
```

Given a markdown file test.md whose contents are as follows:

```markdown
# My Markdown Document

For more information, search on [Google](http://www.google.com).

_Grocery List:_

1. Apples
2. Bananas
3. Oranges
```

The following code:

```python
>>> with open('test.md', 'r') as f:
...     print(htmlize(f.read()))
```

Will produce the following HTML output:

```html
<h1>My Markdown Document</h1>
For more information, search on <a href="http://www.google.com" rel="nofollow">Google</a>.
<em>Grocery List:</em>
<ol>
<li>Apples</li>
<li>Bananas</li>
<li>Oranges</li>
</ol>
```

Hopefully this brief example has also served as a demonstration of how Markdown and other markup languages work to render much simpler text with lightweight markup constructs into a larger publishing framework. Markdown itself is most often used for web publishing, so if you need to write HTML, then this is the choice for you! To learn more about Markdown syntax, please see Markdown Basics.

iPython Notebook

iPython Notebook is a web-based, interactive environment that combines Python code execution, text (marked up with Markdown), mathematics, graphs, and media into a single document. The motivation for iPython Notebook was purely scientific: how do you demonstrate or present your results in a repeatable fashion where others can understand the work you’ve done?

By creating an interactive environment where code, graphics, mathematical formulas, and rich text are unified and executable, iPython Notebook gives a presentation layer to otherwise unreadable or inscrutable code. Although Markdown is a big part of iPython Notebook, it deserves a special mention because of how critical it is to the data science community. iPython Notebook is interesting because it combines both the presentation layer and the markup layer. When run as a server, usually locally, the notebook is editable, explorable (a tree view will present multiple notebook files), and executable – any code written in Python in the notebook can be evaluated and run using an interactive kernel in the background. Math formulas written in LaTeX are rendered using MathJax. To enhance the delivery and shareability of these notebooks, the NBViewer allows you to share static notebooks from a Github repository. iPython Notebook comes with most scientific distributions of Python, like Anaconda or Canopy, but it is also easy to install iPython with pip:

$ pip install ipython

iPython itself is an enhanced interactive Python shell or REPL that extends the basic Python REPL with many advanced features, primarily allowing for a decoupled two-process model that enables the notebook. This process model essentially runs Python as a background kernel that receives execution instructions from clients and returns responses back to them.

To start an iPython notebook execute the following command:

$ ipython notebook

This will start a local server at http://127.0.0.1:8888 and automatically open your default browser to it. You’ll start in the “dashboard view”, which shows all of the notebooks available in the current working directory. Here you can create new notebooks and start to edit them. Notebooks are saved as .ipynb files in the local directory, a format called “Jupyter” that is simple JSON with a specific structure for representing each cell in the notebook. The Jupyter notebook files are easily versioned via Git and Github since they are also plain text. To learn more about iPython Notebook, please see the iPython Notebook documentation.

reStructuredText

reStructuredText is an easy-to-read plaintext markup syntax specifically designed for use in Python docstrings or to generate Python documentation. In fact, the reStructuredText parser is a component of Docutils, an open-source text processing system that is used by Sphinx to generate intelligent and beautiful software documentation, in particular the native Python documentation.

Python software has a long history of good documentation, particularly because of the idea that batteries should come included. And documentation is a very strong battery! PyPI, the Python Package Index, ensures that third party packages provide documentation, and that the documentation can be easily hosted online through Python Hosted. Because of the ease of use and ubiquity of the tools, Python programmers are known for having very consistently documented code; sometimes it’s hard to tell the standard library from third party modules!

In How to Develop Quality Python Code, I mentioned that you should use Sphinx to generate documentation for your apps and libraries in a docs directory at the top level. Generating reStructuredText documentation in a docs directory is fairly easy:

```
$ mkdir docs
$ cd docs
$ sphinx-quickstart
```

The quickstart utility will ask you many questions to configure your documentation. Aside from the project name, author, and version (which you have to type in yourself), the defaults are fine. However, I do like to change a few things:

...
> todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: y
> coverage: checks for documentation coverage (y/n) [n]: y
...
> mathjax: include math, rendered in the browser by MathJax (y/n) [n]: y

Similar to iPython Notebook, reStructured text can render LaTeX syntax mathematical formulas. This utility will create a Makefile for you; to generate HTML documentation, simply run the following command in the docs directory:

$ make html

The output will be built in the folder _build/html where you can open the index.html in your browser.

While hosting documentation on Python Hosted is a good choice, a better choice might be Read the Docs, a website that allows you to create, host, and browse documentation. One great part of Read the Docs is the stylesheet that they use; it’s more readable than older ones. Additionally, Read the Docs allows you to connect a Github repository so that whenever you push new code (and new documentation), it is automatically built and updated on the website. Read the Docs can even maintain different versions of documentation for different releases.

Note that even if you aren’t interested in the overhead of learning reStructuredText, you should use your newly found Markdown skills to ensure that you have good documentation hosted on Read the Docs. See MkDocs for document generation in Markdown that Read the Docs will render.

AsciiDoc

When writing longer publications, you’ll need a more expressive tool that is just as lightweight as Markdown but able to handle constructs that go beyond simple HTML, for example cross-references, chapter compilation, or multi-document build chains. Longer publications should also move beyond the web and be renderable as an eBook (ePub or Mobi formats) or for print layout, e.g. PDF. These requirements add more overhead, but simplify workflows for larger media publication.

Writing for O’Reilly, I discovered that I really enjoyed working in AsciiDoc – a lightweight markup syntax, very similar to Markdown, which renders to HTML or DocBook. DocBook is very important, because it can be post-processed into other presentation formats such as HTML, PDF, EPUB, DVI, MOBI, and more, making AsciiDoc an effective tool not only for web publishing but also print and book publishing. Most text editors have an AsciiDoc grammar for syntax highlighting, in particular sublime-asciidoc and Atom AsciiDoc Preview, which make writing AsciiDoc as easy as Markdown.

AsciiDoctor is an AsciiDoc-specific toolchain for building books and websites from AsciiDoc. The project connects the various AsciiDoc tools and allows a simple command-line interface as well as preview tools. AsciiDoctor is primarily used for HTML and eBook formats, but at the time of this writing there is a PDF renderer, which is in beta. Another interesting project of O’Reilly’s is Atlas, a system for push-button publishing that manages AsciiDoc using a Git repository and wraps editorial build processes, comments, and automatic editing in a web platform. I’d be remiss not to mention GitBook which provides a similar toolchain for publishing larger books, though with Markdown.

Editor’s Note: GitBook does support AsciiDoc.

LaTeX

If you’ve done any graduate work in a STEM field, then you are probably already familiar with LaTeX for writing and publishing articles, reports, conference and journal papers, and books. LaTeX is not a simple markup language, to say the least, but it is effective. It is able to handle almost any publishing scenario you can throw at it, including (and in particular) rendering complex mathematical formulas correctly from a text markup language. Most data scientists still use LaTeX, along with MathJax or the Daum Equation Editor, if only for the math.

If you’re going to be writing PDFs or reports, I can provide two primary tips for working with LaTeX. First consider cloud-based editing with Overleaf or ShareLaTeX, which allows you to collaborate and edit LaTeX documents similarly to Google Docs. Both of these systems have many of the classes and stylesheets already so that you don’t have to worry too much about the formatting, and instead just get down to writing. Additionally, they aggregate other tools like LaTeX templates and provide templates of their own for most document types.

My personal favorite workflow, however, is to use the Atom editor with the LaTeX package and the LaTeX grammar. When using Atom, you get very nice Git and Github integration – perfect for collaboration on larger documents. If you have a TeX distribution installed (and you will need to do that on your local system, no matter what), then you can automatically build your documents within Atom and view them in PDF preview.

A complete tutorial for learning LaTeX can be found at Text Formatting with LaTeX.

Conclusion

Software developers agree that testing and documentation are vital to the successful creation and deployment of applications. However, although Agile workflows are designed to ensure that documentation and testing are included in the software development lifecycle, too often they are left until last, or forgotten. When managing a development project, team leads need to ensure that documentation and testing are part of the “definition of done.”

In the same way, writing is vital to the successful creation and deployment of data products, and is similarly left to last or forgotten. Through publication of our work and ideas, we open ourselves up to criticism, an effective methodology for testing ideas and discovering new ones. Similarly, by explicitly sharing our methods, we make it easier for others to build systems rapidly, and in return, write tutorials that help us better build our systems. And if we translate scientific papers into practical guides, we help to push science along as well.

Don’t get bogged down in the details of writing, however. Use simple, lightweight markup languages to include documentation alongside your projects. Collaborate with other authors and your team using version control systems, and use free tools to make your work widely available. All of this is possible because of lightweight markup languages, and the more proficient you are at including writing in your workflow, the easier it will be to share your ideas.

This post is particularly link-heavy with many references to tools and languages. For reference, here are my preferred guides for each of the Markup languages discussed:

Special thanks to Rebecca Bilbro for editing and contributing to this post. Without her, this would certainly have been much less readable!

Benjamin Bengfort

## 2015 – International Year of Light

2015 has been declared the International Year of Light (IYL 2015) and, with me being an optics geek, well, it was difficult to resist writing a post about it. The IYL 2015 is a global initiative adopted by the United Nations to raise awareness of how optical technologies promote sustainable development and provide solutions to worldwide challenges in areas such as energy, education, communications, health, and sustainability.

There will be a number of events and programmes run throughout the year, and the aim of many of them is to promote public and political understanding of the central role of light in the modern world, while also celebrating noteworthy anniversaries in 2015 – from the first studies of optics 1000 years ago to discoveries in optical communications that power the Internet today.

You can find further information from the well-known OSA here and check out the International Year of Light Blog.

Here are some pictures I took a couple of years ago during CLEO Europe, in relation to the International Year of Light.

## March 2015 Total Solar Eclipse

I know it is a bit late, but with the moving of the blog and all that jazz, I did not have time to post this earlier. This is a video taken by Bob Forrest, a former Specialist Technician at Bayfordbury’s Observatory at the University of Hertfordshire. The video is of the Total Solar Eclipse in March 2015.

Enjoy…

## Shelf Life – A great project at the American Museum of Natural History

I am a geek, and proudly so, and as such I have been known to visit exhibitions at the excellent Natural History Museum and the Science Museum in London, the Field Museum in Chicago, or indeed the American Museum of Natural History (AMNH). As a matter of fact, in August I did go to the AMNH and had a great time. I particularly enjoyed the Hayden Planetarium, part of the Rose Center for Earth and Space, with its iconic glass cube encasing the spherical Space Theater.

I am always in awe at the enormous number of items in the collections of these museums, cataloguing human knowledge from taxonomy and evolution to geology and astrophysics. I was thus really intrigued when Roberto Moncada, from the AMNH, sent some information about the most recent project at the museum: Shelf Life.

The AMNH has a collection of over 33 million specimens and artefacts. As is usually the case, some of these items tell us a story about the state of knowledge at different points in human history, and they range from the rare and irreplaceable to the amazing and precious. In the Shelf Life project, the museum keeps at heart its mission to share its collections and educate the public about the work that it does, with the help of videos released monthly over the next year. In Episode 1, they take us inside the museum collections: “from centuries-old specimens to entirely new types of specialized collections like frozen tissues and genomic data”. In Episode 2, they talk to us about the art and science of classification, taxonomy, and the way in which 33 million (plus) items get organised in the collection of the museum. Go, have a look at their shelves; you will surely find something of interest among those 33 million items!

## Chris Hadfield event at the Royal Geographical Society

Chris Hadfield is speaking at the Royal Geographical Society in London as part of the Guardian Live events. I managed to get a couple of great seats to hear him speak about his book “You are here”. Looking forward to seeing the images he captured while at the ISS.