Installing MacTeX on M1 Apple Macs

UPDATE – On April 1st, 2021, MacTeX released the 2021 version of the software. In this new version the code is universal and works well with both Intel and Apple silicon:

All binaries in MacTeX-2021 are universal, with code for both Arm and Intel processors. The same source code is used to compile both types of code, so Apple’s Arm and Intel machines are on exactly the same footing. 

I have been playing with a new M1 MacBook Air and I must admit it is quite an experience. It is a very responsive, fast little machine with no fan! Great so far. I have been looking into using a lot of the software that enables my own work, and that includes, among other things, LaTeX.

I had a look at the MacTeX pages about ARM and it was great to see that the view is that Rosetta 2 would have me covered… I am not ready to go native just yet… So after installing the software I was surprised to get some messages in my terminal telling me that pdflatex could not be found.

After playing with things for a bit I found that the files ended up in a folder named after the architecture of the machine. So if you are looking for things on your machine, take a look in the following path:

/usr/local/texlive/2020/bin/x86_64-darwin/

I have had to point some other software to this location and things seem to be working fine. I have not looked into the implications for other things such as the conversion of Jupyter notebooks into LaTeX documents… I will do so and report back.

Data Skeptic Podcast

I had an opportunity to be one of the panellists in the Data Skeptic podcast recently. It was great to have been invited, and as a listener to the podcast it was a real treat to be able to take part. Also, recording it was fun…

You can listen to the episode here.

More information about the Data Skeptic Journal Club can be found on their site. I would like to thank Kyle Polich, Lan Guo and George Kemp for having me as a guest. I hope it is not the last time!

In the episode Kyle talks about the relationship between Covid-19 and Carbon Emissions. George tells us about the new Hateful Memes Challenge from Facebook. Lan joins us to talk about Google’s AI Explorables. I talk about a paper that uses neural networks to detect infections in the ear.

Let me know what you guys think!

Apple Developer Support

It is great to see all the support that Apple Developers get in terms of tools, ecosystem, community and more.


For starters, the Developer Support portal has a ton of information for the newcomer as well as for the most expert of experts, including guides and documentation for tools such as Xcode, and information about developing software for macOS and iOS.

Information about Design is available in the same place, including Human Interface Guidelines, Fonts (including downloads for San Francisco!) and information about accessibility and localisation.

Information about new tools and updates, such as the latest on Swift and SwiftUI, can easily be found. And testing your apps with the help of tools such as TestFlight makes things so much easier.

Orion at the Institute of Physics

via Instagram http://bit.ly/2DGSPaI

It was great to have been able to attend a lecture at the new home of the Institute of Physics. I have been a member for almost two decades and I have even served as an officer for one of the interest groups, the Computational Physics Group if you must know.

The event was a talk by Stephen Hilton from the UCL School of Pharmacy titled 3D Printing and its Application in Chemistry and Pharmacy. It was a very useful talk covering applications ranging from teaching and cost saving in chemistry labs to personalised medicine and chemistry itself.

As for the building, it was nice to finally see the end result, with a hint of brutalist architecture and some nice details such as the electromagnetic wave diagram in some of the windows, and Orion on the ceiling!

Magic Mouse – Secondary Click Not Working

I have recently taken Mojave for a spin and I am really happy with the changes in the new OS. I know it is merely eye candy, but I really like the dark theme. Things have been working well, but I came across a nagging issue with my Magic Mouse:

For some reason the secondary click would simply not work. I made sure the “Secondary Click” option was ticked in the mouse settings. I tried ticking it on and off, restarting the machine, deleting the mouse and reconnecting it… nothing worked…

Finally I decided to take a look at some of the plist files, and here is my solution to this problem:

    1. Go to the ~/Library/Preferences/ directory
    2. Delete the following files:
       com.apple.AppleMultitouchMouse.plist
       com.apple.driver.AppleBluetoothMultitouch.mouse.plist
    3. Restart the machine

Et voilà!

Persistent “Previous Recipients” in Mac Mail

Hello everyone! I am very pleased to take a question from John who got in touch with Quantum Tunnel using the form here. John’s favourite scientist is Einstein and his question is as follows:

In Mac Mail I cannot delete unwanted email addresses. I have done the routine of deleting all addresses from the previous recipients list, but when starting a new email unwanted addresses appear… Any help is appreciated. Thanks, John

John is referring to the solution I provided in this earlier post. Sadly, the list of his lucky friends/colleagues/family (delete as appropriate) he has emailed recently persists even after clearing the “Previous Recipients” list as explained in that post.

There may be a way to force the clearing of these persistent email addresses:

  • Quit Mail and Address Book (in case the latter is open)
  • Open a terminal and type the following command (note the escaped space in “Application Support”, needed for the command to work):
    • `rm ~/Library/Application\ Support/AddressBook/MailRecents-v4.abcdmr`
  • Log out and back in again
  • Start Mail
  • You may have to clear the “Previous Recipients” list as per the post mentioned above

You should now be able to clear the list. And, in case you were wondering, the file we deleted will be created afresh to start accumulating new “recent recipients” (yay!).

Et voilà!

CoreML – iOS Implementation for the Boston Model (part 2) – Filling the Picker

Right! Where were we? Yes, last time we put together a skeleton for the CoreML Boston Model application that will take two inputs (crime rate and number of rooms) and provide a prediction of the price of a Boston property (yes, based on somewhat dated prices…). We are making use of three labels, one picker and one button.

Let us start by creating variables to hold the potential values for the input variables. We will do this in the ViewController, selecting this file from the left-hand side menu.

Inside the ViewController class definition enter the following variable assignments:

// Ranges of values offered to the user for each predictor
let crimeData = Array(stride(from: 0.1, through: 0.3, by: 0.01))
let roomData = Array(4...9)

These values are informed by the data exploration we carried out in an earlier post. We are going to use the arrays defined above to populate the values that will be shown in our picker. For this we need to define a data source for the picker and make sure that there are two components to choose values from.

Before we do any of that, we need to connect the view from our storyboard to the code; in particular, we need to create outlets for the picker and for the button. Select Main.storyboard from the menu on the left-hand side. With the storyboard in view, click the button in the top right-hand corner of Xcode with an icon of two intersecting circles; you will now see the storyboard side by side with the code. While pressing the Control key, select the picker by clicking on it and, without letting go, drag into the code window (you will see an arrow appear as you drag).

You will see a dialogue window where you can enter a name for the element in your storyboard. In this case I am calling my picker inputPicker. After pressing the “Connect” button a new line of code appears, and you will see a small circle on top of the code line number indicating that a connection with the storyboard has been made. Do the same for the button and call it predictButton.
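
If all went well, the top of your ViewController now contains outlet declarations along these lines (a sketch assuming the names inputPicker and predictButton chosen above):

// Outlets connected from Main.storyboard
@IBOutlet weak var inputPicker: UIPickerView!
@IBOutlet weak var predictButton: UIButton!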

In order to make our life a little bit easier, we are going to bundle together the input values. At the bottom of the ViewController code write the following:

enum inputPredictor: Int {
    // Raw values correspond to the picker component indices
    case crime = 0
    case rooms
}

We have defined an enum called inputPredictor that holds the values for crime and rooms. In turn, we will use this object to populate the picker as follows: in the same ViewController file, after the class definition provided in the project by default, we are going to write an extension for the data source. Write the following code:

extension ViewController: UIPickerViewDataSource {

    func numberOfComponents(in pickerView: UIPickerView) -> Int {
        return 2
    }

    func pickerView(_ pickerView: UIPickerView,
                    numberOfRowsInComponent component: Int) -> Int {
        guard let inputVals = inputPredictor(rawValue: component) else {
            fatalError("No predictor for component")
        }

        switch inputVals {
        case .crime:
            return crimeData.count
        case .rooms:
            return roomData.count
        }
    }
}

With the function numberOfComponents we are indicating that we want to have 2 components in this view. Notice that inside the pickerView function we are creating a constant inputVals defined by the values from inputPredictor. So far we have indicated where the values for the picker come from, but we have not delegated the actions that can be taken with those values, namely displaying them and picking them (after all, this element is a picker!) so that we can use the values elsewhere. If you were to execute this app, you would see an empty picker…

OK, so what we need to do is adopt the UIPickerViewDelegate protocol, and we do this by entering the following code right under the previous snippet:

extension ViewController: UIPickerViewDelegate {
    func pickerView(_ pickerView: UIPickerView, titleForRow row: Int,
                    forComponent component: Int) -> String? {
        guard let inputVals = inputPredictor(rawValue: component) else {
            fatalError("No predictor for component")
        }

        switch inputVals {
        case .crime:
            return String(crimeData[row])
        case .rooms:
            return String(roomData[row])
        }
    }

    func pickerView(_ pickerView: UIPickerView, didSelectRow row: Int,
                    inComponent component: Int) {
        guard let inputVals = inputPredictor(rawValue: component) else {
            fatalError("No predictor for component")
        }

        switch inputVals {
        case .crime:
            print(String(crimeData[row]))
        case .rooms:
            print(String(roomData[row]))
        }
    }
}

In the first function we are defining what values are supposed to be shown for the titleForRow in the picker, and we do this for each of the two elements we have, i.e. crime and rooms. In the second function we are defining what happens when a row is selected (didSelectRow), in other words when the user picks the value being shown by each of the two components in the picker. Not too bad, right?

Well, if you were to run this application you would still see no change in the picker… Why is that? The answer is that we need to let the application know what needs to be shown when the elements load. Go back to the top of the code (around line 20 or so), below the lines that define the outlets for the picker and the button, and write the following code:

override func viewDidLoad() {
    super.viewDidLoad()
    // Picker data source and delegate
    inputPicker.dataSource = self
    inputPicker.delegate = self
}

OK, we can now run the application: in the top left-hand corner of the Xcode window you will see a play button; clicking it will launch the Simulator and you will be able to see your picker working. Go on, select a few values from each of the components.

In the next post we will write code to activate the button to run a prediction using our CoreML model with the values selected from the picker and show the result to the user. Stay tuned!

You can look at the code (in development) on my GitHub site here.

CoreML – Model properties

If you have been following the posts in this open notebook, you may know that by now we have managed to create a linear regression model for the Boston Price dataset based on two predictors, namely crime rate and average number of rooms. It is by no means the best model out there, and our aim is to explore the creation of a model (in this case with Python) and convert it to a Core ML model that can be deployed in an iOS app.

Before moving on to the development of the app, I thought it would be good to take a look at the properties of the converted model. If we open the PriceBoston.mlmodel file we saved in the previous post (in Xcode, of course) we will see the following information:

We can see the name of the model (PriceBoston) and the fact that it is a “Pipeline Regressor”. The model can be given various attributes such as Author, Description, License, etc. We can also see the listing of the Model Evaluation Parameters in the form of Inputs (crime rate and number of rooms) and Outputs (price). There is also an entry describing the Model Class (PriceBoston); without attaching the model to a target, the class is not actually present. Once we make this model part of a target inside an app, Xcode will generate the appropriate code.

Just to give you a flavour of the code that will be generated when we attach this model to a target, here is a rough sketch of the kind of interface Xcode produces; the exact code depends on your Xcode version and the .mlmodel itself, so treat the details below as illustrative:

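import CoreML

// PriceBoston.swift
// This file would be generated automatically by Xcode from PriceBoston.mlmodel.
// (Illustrative sketch; the exact code depends on your Xcode version.)
class PriceBostonInput: MLFeatureProvider {
    // Input features of the model
    var crime: Double
    var rooms: Double

    // Names of the features the model expects
    var featureNames: Set<String> {
        return ["crime", "rooms"]
    }

    // Return the value for a given feature name
    func featureValue(for featureName: String) -> MLFeatureValue? {
        if featureName == "crime" {
            return MLFeatureValue(double: crime)
        }
        if featureName == "rooms" {
            return MLFeatureValue(double: rooms)
        }
        return nil
    }

    init(crime: Double, rooms: Double) {
        self.crime = crime
        self.rooms = rooms
    }
}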

You can see that the code is generated automatically (see the comment at the beginning of the Swift file). The code defines the input variables and feature names, defines a way to extract values for a given feature name, sets up the model output, and other bits and pieces such as defining the class for model loading and prediction (not shown). All this is taken care of by Xcode, making it very easy for us to use the model in our app. We will start building that app in the following posts (bear with me, I promise we will get there).

Enjoy!

Core ML – What is it?

In a previous post I mentioned that I will be sharing some notes about my journey into doing data science and machine learning with Apple technology. This is the first of those posts, and here I will go into what Core ML is…

Core ML is a computer framework. So what is a framework? Well, in computing terms it is a software abstraction that enables generic functionality to be modified as required by the user, transforming it into software for a specific purpose and enabling the development of a system or even a humble project.

So Core ML is an Apple-provided framework to speed up the development of apps that use trained machine learning models. Notice that word – trained – it is part of the description of the framework. This means that the model has to be developed externally, with appropriate training data for the specific project in mind. For instance, if you are interested in building a classifier that distinguishes cats from cars, then you need to train the model with lots of cat and car images.

As it stands, Core ML supports a variety of machine learning models, from generalised linear models (GLMs for short) to neural nets. Furthermore, it helps with the task of adding the trained machine learning model to your application by automatically creating a custom programmatic interface that supplies an API to your model. All this within the comfort of Xcode!

There is an important point to remember: the model has to be developed externally from Core ML. In other words, you may want to use your favourite machine learning framework (that word again), computer language and environment to cover the different aspects of the data science workflow. You can read more about that in Chapter 3 of my “Data Science and Analytics with Python” book. So whether you use Scikit-learn, Keras or Caffe, the model you develop has to be trained (and tested and evaluated) beforehand. Once you are ready, Core ML will support you in bringing it to the masses via your app.
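
To give a flavour of what that programmatic interface feels like, here is a minimal sketch in Swift, assuming a hypothetical generated class called PriceBoston with two inputs, crime and rooms (the real class, names and signatures are produced by Xcode from your .mlmodel and may differ):

import CoreML

// PriceBoston is a hypothetical model class generated by Xcode;
// the generated prediction method takes the model inputs as arguments.
let model = PriceBoston()
if let output = try? model.prediction(crime: 0.15, rooms: 6.0) {
    print("Predicted price: \(output.price)")
}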

As mentioned in the Core ML documentation:

Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. Running strictly on the device ensures the privacy of user data and guarantees that your app remains functional and responsive when a network connection is unavailable.

OK, so in the next few posts we will be using Python and coremltools to generate a so-called .mlmodel file that Xcode can use and deploy. Stay tuned!

Anaconda – Guaranteed Python packages via Conda and Conda-Forge

During the weekend a member of the team got in touch because he was unable to get a Python package working for him. He had just installed Python on his machine, but things were not quite right… For example, pip was not working and he had a bit of a bother setting some environment variables… I recommended that he have a look at installing Python via the Anaconda distribution. Today he was up and running with his app.

Given that outcome, I thought it was a great coincidence that the latest episode of Talk Python To Me that started playing on my way back home happened to be about Conda and Conda-Forge. I highly recommend listening to it. Take a look:

Talk Python To Me – Python conversations for passionate developers – #94 Guaranteed packages via Conda and Conda-Forge

Have you ever had trouble installing a package you wanted to use in your Python app? Likely it contained some odd dependency, required a compilation step, maybe even using an uncommon compiler like Fortran. Did you try it on Windows? How many times have you seen “Cannot find vcvarsall.bat” before you had to take a walk?

If this sounds familiar, you might want to check out conda (the package manager), Anaconda (the distribution), conda-forge, and conda-build. They dramatically lower the bar for installing packages on all platforms.

This week you’ll meet Phil Elson, Kale Franz, and Michael Sarahan who all work on various parts of this ecosystem.

Links from the show:

conda: conda.pydata.org
conda-build: conda.pydata.org/docs/commands/build/conda-build.html
Anaconda distribution: continuum.io/anaconda-overview
conda-forge: conda-forge.github.io

Phil Elson on Twitter: @pypelson
Kale Franz: @kalefranz
Michael Sarahan: github.com/msarahan