Random thoughts about random subjects… From science to literature and between manga and watercolours, passing by data science and rugby; including film, physics and fiction, programming, pictures and puns.
We are getting close to the end of my 11-week Data Science class at General Assembly. As on previous occasions, I have had a whale of a time talking to people who are genuinely interested in data, analytics, science and models. Some of the projects this time have been Kaggle competitions. This has brought some advantages, as the data is readily available, but other challenges do arise. It is effectively a game of whack-a-mole, right? Sometimes the data is masked or hashed, there may be too much of it, or only limited information.
In any case, the fact that you can submit your predictions and be ranked among other competitors does raise the question of how (and more importantly why) you gain an extra basis point in your score. In some cases this may indeed be important, but my view here is that since “all models are wrong”, the truly important thing is to ask how comfortable you are with the score obtained and whether your business or application is resilient to that kind of error (think airplane safety versus ice-cream flavour choices). This discussion reminded me of a recent episode of Talking Machines, a podcast about machine learning that I recommended to you readers some time ago.
In episode 13 of Talking Machines, Katherine Gorman and Ryan Adams interviewed Claudia Perlich, the Chief Data Scientist at DStillery. Claudia has won a number of competitions. She was trying to avoid talking about the subject, and I am glad the interviewers steered the conversation that way. Her secret to winning so many competitions, according to her, is that she “finds something wrong with the data”. She explains that she likes getting intimately familiar with the data, and often she comes across something that should not be there and can thus be exploited.
She talked about a particular breast cancer modelling competition where they built the most predictive model “not because we understand medicine”, she explained, “but because we realised that the patient identifier, which was just a random number, was by far the most predictive feature”. The story behind that dataset is that it was compiled from different data sources: some were from screening centres, others from treatment centres. As such, she explains, “the base rate, i.e. the natural percentage of the population that was affected, was very different and you could back this out from the patient identifier”. If the organisers had been explicit about this, then the modelling would have been treated differently. I particularly like the fact that she highlights that these exploits are of importance in a competition environment but not in “real applications”.
When asked about her approach to finding these exploits, she explains that she looks at the data “in the screen, like the matrix, you have these things flashing down and what works very well for me is a certain expectation or intuition of what you should be seeing”. As an example, Claudia mentions that things that should not be sorted but appear sorted in the dataset may be an indication of manipulation. Another example: features that should be numeric but show certain values appearing over and over again for no apparent reason, which typically means that someone, for instance, replaced missing values with the average or the median. A practical tip she offers: if a nearest-neighbour model performs better than other algorithms, it is an indication of potential duplicates in the dataset!
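Heuristics like these are easy to turn into quick sanity checks before any modelling starts. Here is a rough sketch of what that might look like with pandas; the toy DataFrame and its column names are invented for illustration:

```python
import pandas as pd

# Toy dataset for illustration; in practice this would be the competition data.
df = pd.DataFrame({
    "patient_id": [101, 102, 103, 104, 105, 105],
    "age": [34.0, 51.2, 42.5, 42.5, 42.5, 42.5],
})

# 1. Columns that should not be sorted but are may hint at manipulation.
for col in df.columns:
    if df[col].is_monotonic_increasing:
        print(f"{col} is sorted -- was the data ordered before release?")

# 2. A numeric value repeated suspiciously often may be an imputed mean/median.
top_freq = df["age"].value_counts(normalize=True).iloc[0]
if top_freq > 0.5:
    print("One 'age' value dominates -- possible mean/median imputation.")

# 3. Exact duplicate rows: the kind of quirk a nearest-neighbour model exploits.
print(f"{df.duplicated().sum()} duplicated rows")
```

None of these checks proves anything on its own; they are prompts to go and look, which is exactly the point Claudia makes.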
As I was explaining to one of the students in the course, a lot of the time it is not just about having tools and models at your disposal; experience with their use and outcomes is very important too. I was glad to hear Claudia echo those thoughts! “There is no grand theory behind it, no recommended toolset”, she says. After all, she has been quoted as saying that:
There is no clean or dirty data, just data you don’t understand
Like Claudia, I dislike it when someone else cleans data on my behalf, as that sometimes creates more issues, because in many cases assumptions about the data are made prior to its usage. That is not to say that you do not need to manipulate your data, but at least then you know what transformations you have applied to it and the assumptions you have made.
I highly recommend that you listen to the podcast, it is a very good and informative episode. You can do so here.
As data scientists, it’s easy to get bogged down in the details. We’re busy implementing Python and R code to extract valuable insights from data, train effective machine learning models, or put a distributed computation system together. Many of these tasks, especially those relating to data ingestion or wrangling, are time-consuming but are the bread and butter of the data scientist’s daily grind. What we often forget, however, is that we must not only be data engineers, but also contributors to the data science corpus of knowledge.
If a data product derives its value from data and generates more data in return, then a data scientist derives their value from previously published works and should generate more publications in return. Indeed, one of the reasons that Machine Learning has grown ubiquitous (see the many Python-tagged questions related to ML on Stack Overflow) is thanks to meticulous blog posts and tools from scientific research (e.g. Scikit-Learn) that enable the rapid implementation of a variety of algorithms. Google in particular has driven the growth of data products by publishing systems papers about their methodologies, enabling the creation of open source tools like Hadoop and Word2Vec.
By building on a firm base for both software and for modeling, we are able to achieve greater results, faster. Exploration, discussion, criticism, and experimentation all enable us to have new ideas, write better code, and implement better systems by tapping into the collective genius of a data community. Publishing is vitally important to keeping this data science gravy train on the tracks for the foreseeable future.
In academia, the phrase “publish or perish” describes the pressure to establish legitimacy through publications. Clearly, we don’t want to take our role as authors that far, but the question remains, “How can we effectively build publishing into our workflow?” The answer is through markup languages – simple, streamlined markup that we can add to plain text documents that build into a publishing layout or format. For example, the markup languages/platforms discussed in this post build into the accompanying publishable formats:
Markdown builds into HTML for web publishing.
iPython Notebook combines code, text, and mathematics into interactive web documents.
reStructuredText builds into software documentation via Sphinx.
AsciiDoc builds into HTML or DocBook, and from there into eBook and print formats.
LaTeX builds into print-ready documents such as PDFs.
The great thing about markup languages is that they can be managed inline with your code workflow in the same software versioning repository. Github goes so far as to automatically render Markdown files! In this post, we’ll get you started with several markup and publication styles so that you can find what best fits into your workflow and deployment methodology.
Markdown is the most ubiquitous of the markup languages we’ll describe in this post, and its simplicity means that it is often chosen for a variety of domains and applications, not just publishing. Markdown, originally created by John Gruber, is a text-to-HTML processor, where lightweight syntactic elements are used instead of the more heavyweight HTML tags. Markdown is intended for folks writing for the web, not designing for the web, and in some CMS systems, it is simply the way that you write, no fancy text editor required.
Markdown has seen special growth thanks to Github, which has an extended version of Markdown, usually referred to as “Github-Flavored Markdown.” This style of Markdown extends the basics of the original Markdown to include tables, syntax highlighting, and other inline formatting elements. If you create a Markdown file in Github, it is automatically rendered when viewing files on the web, and if you include a README.md in a directory, that file is rendered below the directory contents when browsing code. Github Issues are also expected to be in Markdown, further extended with tools like checkbox lists.
Markdown is used for so many applications it is difficult to name them all. Below are a select few that might prove useful to your publishing tasks.
Jekyll allows you to create static websites that are built from posts and pages written in Markdown.
Github Pages allows you to quickly publish Jekyll-generated static sites from a Github repository for free.
Silvrback is a lightweight blogging platform that allows you to write in Markdown (this blog is hosted on Silvrback).
Day One is a simple journaling app that allows you to write journal entries in Markdown.
Stack Overflow expects questions, answers, and comments to be written in Markdown.
MkDocs is a software documentation tool written in Markdown that can be hosted on ReadTheDocs.org.
GitBook is a toolchain for publishing books written in Markdown to the web or as an eBook.
There are also a wide variety of editors, browser plugins, viewers, and tools available for Markdown. Both Sublime Text and Atom support Markdown and automatic preview, as well as most text editors you’ll use for coding. Mou is a desktop Markdown editor for Mac OSX and iA Writer is a distraction-free writing tool for Markdown for iOS. (Please comment your favorite tools for Windows and Android). For Chrome, extensions like Markdown Here make it easy to compose emails in Gmail via Markdown or Markdown Preview to view Markdown documents directly in the browser.
Clearly, Markdown enjoys a broad ecosystem and diverse usage. If you’re still writing HTML for anything other than templates, you’re definitely doing it wrong at this point! It’s also worth including Markdown rendering for your own projects if you have user submitted text (also great for text-processing).
Rendering Markdown can be accomplished with the Python Markdown library, usually combined with the Bleach library for sanitizing bad HTML and linkifying raw text. A simple demo of this is as follows:
First install markdown and bleach using pip:
$ pip install markdown bleach
Then create a markdown parsing function as follows:
import bleach
from markdown import markdown

def htmlize(text):
    """
    This helper method renders Markdown then uses Bleach to sanitize it as
    well as converting all links in text to actual anchor tags.
    """
    text = bleach.clean(text, strip=True)  # Clean the text by stripping bad HTML tags
    text = markdown(text)                  # Convert the markdown to HTML
    text = bleach.linkify(text)            # Add links from the text and add nofollow to existing links
    return text
Given a markdown file test.md whose contents are as follows:
# My Markdown Document
For more information, search on [Google](http://www.google.com).
The following code:
>>> with open('test.md', 'r') as f:
...     print(htmlize(f.read()))
Will produce the following HTML output:
<h1>My Markdown Document</h1>
For more information, search on <a href="http://www.google.com" rel="nofollow">Google</a>.
Hopefully this brief example has also served as a demonstration of how Markdown and other markup languages work to render much simpler text with lightweight markup constructs into a larger publishing framework. Markdown itself is most often used for web publishing, so if you need to write HTML, then this is the choice for you!
iPython Notebook is a web-based, interactive environment that combines Python code execution, text (marked up with Markdown), mathematics, graphs, and media into a single document. The motivation for iPython Notebook was purely scientific: How do you demonstrate or present your results in a repeatable fashion where others can understand the work you’ve done? By creating an interactive environment where code, graphics, mathematical formulas, and rich text are unified and executable, iPython Notebook gives a presentation layer to otherwise unreadable or inscrutable code. Although Markdown is a big part of iPython Notebook, it deserves a special mention because of how critical it is to the data science community.
iPython Notebook is interesting because it combines both the presentation layer as well as the markup layer. When run as a server, usually locally, the notebook is editable, explorable (a tree view will present multiple notebook files), and executable – any code written in Python in the notebook can be evaluated and run using an interactive kernel in the background. Math formulas written in LaTeX are rendered using MathJax. To enhance the delivery and shareability of these notebooks, the NBViewer allows you to share static notebooks from a Github repository.
iPython Notebook comes with most scientific distributions of Python like Anaconda or Canopy, but it is also easy to install iPython with pip:
$ pip install ipython
iPython itself is an enhanced interactive Python shell or REPL that extends the basic Python REPL with many advanced features, primarily allowing for a decoupled two-process model that enables the notebook. This process model essentially runs Python as a background kernel that receives execution instructions from clients and returns responses back to them.
To start an iPython notebook execute the following command:
$ ipython notebook
This will start a local server at http://localhost:8888/
and automatically open your default browser to it. You’ll start in the “dashboard view”, which shows all of the notebooks available in the current working directory. Here you can create new notebooks and start to edit them. Notebooks are saved as .ipynb files in the local directory, a format called “Jupyter” that is simple JSON with a specific structure for representing each cell in the notebook. The Jupyter notebook files are easily versioned via Git and Github since they are also plain text.
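Because the format is plain JSON, you can inspect or even generate a notebook with nothing but the standard library. Here is a minimal sketch; the cell contents are invented for illustration, and the field names follow the notebook format’s cell-list layout:

```python
import json

# A minimal notebook: JSON with a list of cells, each tagged by cell_type.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 0,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# My Analysis"]},
        {"cell_type": "code", "metadata": {}, "outputs": [],
         "execution_count": None,
         "source": ["print('hello')"]},
    ],
}

# Round-trip through JSON, just as the .ipynb file on disk is stored.
text = json.dumps(notebook, indent=1)
loaded = json.loads(text)
print([cell["cell_type"] for cell in loaded["cells"]])  # ['markdown', 'code']
```

This plain-text structure is exactly why the files diff and version so nicely in Git.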
reStructuredText is an easy-to-read plaintext markup syntax specifically designed for use in Python docstrings or to generate Python documentation. In fact, the reStructuredText parser is a component of Docutils, an open-source text processing system that is used by Sphinx to generate intelligent and beautiful software documentation, in particular the native Python documentation.
Python software has a long history of good documentation, particularly because of the idea that batteries should come included. And documentation is a very strong battery! PyPI, the Python Package Index, ensures that third party packages provide documentation, and that the documentation can be easily hosted online through Python Hosted. Because of the ease of use and ubiquity of the tools, Python programmers are known for having very consistently documented code; sometimes it’s hard to tell the standard library from third party modules!
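The docstring conventions behind that consistency are worth internalizing. Here is a sketch of the reStructuredText field-list style that Sphinx understands; the function itself is a made-up example:

```python
def scale_distance(distance_km, factor):
    """
    Scale a distance by a unitless factor.

    :param distance_km: the distance to scale, in kilometres
    :type distance_km: float
    :param factor: multiplier applied to the distance
    :type factor: float
    :returns: the scaled distance in kilometres
    :rtype: float
    """
    return distance_km * factor

print(scale_distance(10.0, 1.5))  # 15.0
```

Sphinx can pull docstrings like this straight out of your modules, so the documentation lives next to the code it describes.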
In How to Develop Quality Python Code, I mentioned that you should use Sphinx to generate documentation for your apps and libraries in a docs directory at the top-level. Generating reStructuredText documentation in a docs directory is fairly easy:
$ mkdir docs
$ cd docs
$ sphinx-quickstart
The quickstart utility will ask you many questions to configure your documentation. Aside from the project name, author, and version (which you have to type in yourself), the defaults are fine. However, I do like to change a few things:
> todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: y
> coverage: checks for documentation coverage (y/n) [n]: y
> mathjax: include math, rendered in the browser by MathJax (y/n) [n]: y
Similar to iPython Notebook, reStructuredText can render LaTeX-syntax mathematical formulas. This utility will create a Makefile for you; to generate HTML documentation, simply run the following command in the docs directory:
$ make html
The output will be built in the folder _build/html where you can open the index.html in your browser.
While hosting documentation on Python Hosted is a good choice, a better choice might be Read the Docs, a website that allows you to create, host, and browse documentation. One great part of Read the Docs is the stylesheet that they use; it’s more readable than older ones. Additionally, Read the Docs allows you to connect a Github repository so that whenever you push new code (and new documentation), it is automatically built and updated on the website. Read the Docs can even maintain different versions of documentation for different releases.
Note that even if you aren’t interested in the overhead of learning reStructuredText, you should use your newly found Markdown skills to ensure that you have good documentation hosted on Read the Docs. See MkDocs for document generation in Markdown that Read the Docs will render.
When writing longer publications, you’ll need a more expressive tool that is just as lightweight as Markdown but able to handle constructs that go beyond simple HTML, for example cross-references, chapter compilation, or multi-document build chains. Longer publications should also move beyond the web and be renderable as an eBook (ePub or Mobi formats) or for print layout, e.g. PDF. These requirements add more overhead, but simplify workflows for larger media publication.
Writing for O’Reilly, I discovered that I really enjoyed working in AsciiDoc – a lightweight markup syntax, very similar to Markdown, which renders to HTML or DocBook. DocBook is very important, because it can be post-processed into other presentation formats such as HTML, PDF, EPUB, DVI, MOBI, and more, making AsciiDoc an effective tool not only for web publishing but also print and book publishing. Most text editors have an AsciiDoc grammar for syntax highlighting, in particular sublime-asciidoc and Atom AsciiDoc Preview, which make writing AsciiDoc as easy as Markdown.
AsciiDoctor is an AsciiDoc-specific toolchain for building books and websites from AsciiDoc. The project connects the various AsciiDoc tools and allows a simple command-line interface as well as preview tools. AsciiDoctor is primarily used for HTML and eBook formats, but at the time of this writing there is a PDF renderer, which is in beta. Another interesting project of O’Reilly’s is Atlas, a system for push-button publishing that manages AsciiDoc using a Git repository and wraps editorial build processes, comments, and automatic editing in a web platform. I’d be remiss not to mention GitBook which provides a similar toolchain for publishing larger books, though with Markdown.
If you’ve done any graduate work in a STEM field then you are probably already familiar with using LaTeX to write and publish articles, reports, conference and journal papers, and books. LaTeX is not a simple markup language, to say the least, but it is effective. It is able to handle almost any publishing scenario you can throw at it, including (and in particular) rendering complex mathematical formulas correctly from a text markup language. Most data scientists still use LaTeX, via MathJax or the Daum Equation Editor, if only for the math.
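To give a flavour of why the math alone keeps people on LaTeX: a formula like Bayes’ rule is a one-liner in the source, and MathJax renders that same source in the browser:

```latex
% Bayes' rule, as it would appear in a paper or a MathJax-enabled page
\[
  P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}
\]
```

Writing the equivalent layout by hand in HTML or a word processor is far more painful, which is why even web-first tools borrow LaTeX’s math syntax.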
If you’re going to be writing PDFs or reports, I can provide two primary tips for working with LaTeX. First consider cloud-based editing with Overleaf or ShareLaTeX, which allows you to collaborate and edit LaTeX documents similarly to Google Docs. Both of these systems have many of the classes and stylesheets already so that you don’t have to worry too much about the formatting, and instead just get down to writing. Additionally, they aggregate other tools like LaTeX templates and provide templates of their own for most document types.
My personal favorite workflow, however, is to use the Atom editor with the LaTeX package and the LaTeX grammar. When using Atom, you get very nice Git and Github integration – perfect for collaboration on larger documents. If you have a TeX distribution installed (and you will need to do that on your local system, no matter what), then you can automatically build your documents within Atom and view them in PDF preview.
Software developers agree that testing and documentation is vital to the successful creation and deployment of applications. However, although Agile workflows are designed to ensure that documentation and testing are included in the software development lifecycle, too often testing and documentation is left to last, or forgotten. When managing a development project, team leads need to ensure that documentation and testing are part of the “definition of done.”
In the same way, writing is vital to the successful creation and deployment of data products, and is similarly left to last or forgotten. Through publication of our work and ideas, we open ourselves up to criticism, an effective methodology for testing ideas and discovering new ones. Similarly, by explicitly sharing our methods, we make it easier for others to build systems rapidly, and in return, write tutorials that help us better build our systems. And if we translate scientific papers into practical guides, we help to push science along as well.
Don’t get bogged down in the details of writing, however. Use simple, lightweight markup languages to include documentation alongside your projects. Collaborate with other authors and your team using version control systems, and use free tools to make your work widely available. All of this is possible because of lightweight markup languages, and the more proficient you are at including writing in your workflow, the easier it will be to share your ideas.
This post is particularly link-heavy with many references to tools and languages. For reference, here are my preferred guides for each of the Markup languages discussed:
I was in the middle of an introductory workshop for Data Science at General Assembly and I was talking about using command line instructions to facilitate the manipulation of files and folders. We covered some of the usual ones such as ls, mv, mkdir, cat, more, less, etc. I was then going to demonstrate how easy it was to download a file from the command line using curl and I had prepared a small file uploaded to Dropbox and shortened its URL with bit.ly.
“So far so good” – I thought – and then proceeded with the demonstration… only to find out that the command I was using was indeed downloading a file, but it was only downloading the wrapper HTML created by bit.ly for the redirection… I should have known better than that! Of course, all this was happening while various pairs of gazing eyes were upon me… I tried again using a different flag and… nothing! And again… nothing… Pressure mounting, I decided to cut the embarrassment short and apologised. I got them to download the file in the less glamorous way, by using the browser…
So, if you are ever in that predicament, here is the solution: use the -L flag with curl, which tells it to follow any redirects:
$ curl -L -o data.csv http://bit.ly/<your-short-url>
I have had this 17-inch MacBook Pro for a few years… perhaps about 8 years? Probably a bit more? In any case, I have it more as a memento than anything else, as I have a more modern one these days. I still keep it updated and all the rest of it, so I was rather surprised to get it out and see that the battery has effectively burst!!! I hope the rest of the machine still works though :(
I was confronted with an old issue that had not been an issue for a while: writing to an external hard drive that was formatted with Windows (NTFS) from my Mac. I used to have NTFS-3G (together with MacFUSE) installed and that used to be fine. However, I guess something went a bit awry with Mavericks, as I was not able to get my old solution to work.
So, here is what I did (you will need superuser powers, so be prepared to type your password):
Open a Terminal (Terminal.app) and create a file called fstab in the /etc folder. For instance you can type:
$ sudo nano /etc/fstab
You can now enter some information in your newly created file telling MacOS information about your device. If your external drive is called mydevice enter the following:
LABEL=mydevice none ntfs rw,auto,nobrowse
Use tabs between the fields listed above. Save your file and you are now ready to plug in your device.
There is a small caveat: once you do this, your hard drive is not going to appear on your Desktop. But do not despair, you can still use the terminal to access the mounted drives by going to the /Volumes folder as follows:
$ cd /Volumes
I am sure you, like me, have had the need to reduce the file size of a PDF. Take, for example, the occasional need to send a PDF by email, only to find out that the size is such that the message is rejected. I have used Adobe Acrobat Pro to help, but recently I came across an alternative way of achieving this: use ColorSync Utility on a Mac. Here is how:
Right click the PDF that needs reducing and select “Open with…”
Select ColorSync Utility and wait for the application to open the file
At the bottom of the status bar in the application, you can now select one of the Quartz filters available, such as “Reduce File Size”, and save the result
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
as if they were flowers
with spinning blossoms.
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal brothers and sisters,
and all watched over
by machines of loving grace.
I have been waiting quite a while for the RFU to make their podcasts available via iTunes or some other similar service. I used to listen to them, but for one reason or another the feed changed, to the extent that no submission was made to iTunes and the RSS on the RFU’s website is basically dead.
So, inspired by a post by Rolando Garza, I decided to hack together an RSS feed that can actually be used to download the RFU podcast and get some information about rugby. I used Feedity, which uses HTML scraping to generate an RSS feed of almost any page. With the help of Yahoo Pipes, I managed to use the magic of regular expressions to add appropriate dates and enclosures to the feed, and the result is the RFU Podcast Feed.
So, as long as the RFU does not change the way they deal with their website and the posting of their mp3 content, you can enjoy a bit of rugby right on your mp3 device.