A collection of posts related to Coding, Programming, Hacking and Computer Tricks.

Take a look and enjoy

Remove Unwanted Google Calendars

white and black weekly planner on gray surface
Photo by Bich Tran on Pexels.com

I have been using GSuite for the last year or so at work. In general it seems fine: Gmail's email capabilities are put to good use, and the storage, shared drives and similar features work well.

Calendar is OK and it does the job; however, there was one very irritating thing about being invited to see colleagues' calendars and/or subscribing to them. On the one hand it is useful to see calendar availability, but I don't want to see all of those calendars on my mobile device, or in the Calendar app on my MacBook, all the time.

A quick solution would be to "uncheck" the unwanted calendars on your device, but... the problem is that when you uncheck those calendars, they're still there. You may not see them, but boy, you do keep getting their reminders, notifications and alerts - and most (all?) of the time they are not even for me to act on!

So if you want to remove these extra calendars from your devices, but still access them via the web and Google apps, then do the following:

  1. Go to https://www.google.com/calendar/syncselect
  2. Login to your Google Account, and
  3. Uncheck the calendars you don't want to sync

Then, on each device, you can choose which delegated calendars to show in the Apple Calendar app. Simply:

  1. Go to Calendar - Preferences
  2. Select Account Information and look for the Google Account in question
  3. Select the "Delegation" tab and check the things you may want to see (or not)

Et voilà!


Importing Organised Picture Folders to Apple Photos

I have been considering moving a considerable photo collection that I have amassed over the years since getting my first digital camera. Before that I used to take a lot of pictures with a beloved Canon SLR that belonged to my father. Sadly that camera got stolen on a holiday in Cancún... but that is a story for another time.

Anyway, I used to use Picasa to organise my photos into albums and upload or share some with friends and family. Picasa was superseded by Google Photos, and I never quite liked losing some control over where my photos went.

I have been an iOS user of Apple Photos -- I like the simplicity of taking a picture and having it be part of an album that I keep on my phone until I clean the album... I did try using the Mac version but, as I said, I never liked just getting a soup of pictures. I wanted to keep them in the album/folder hierarchy I curated myself. It is only now that I have found a way to get Apple Photos to respect my hierarchy. Here is what you need to do:

  1. Find the place where your pictures are organised in folders and drag the top folder onto the Photos app icon in the Dock. It does not matter whether the app is running
    • NOTE: Do not drag it to the Photos app window. If you do, the application behaves in a different way and you will end up with a soup of photographs.
  2. At the top right of the window you will see a checkbox that reads "Keep Folder Organisation" (see the screenshot above)
  3. Click the blue button, “Import All New Items”

Et voilà! Your imported photos will show up in an organised folder in the sidebar.


Sci-Advent - Writing Reports Tailored for AI Readers

This is a reblog from an article by John Naughton in the Guardian on Dec 5th 2020. Read the original here.

My eye was caught by the title of a working paper published by the National Bureau of Economic Research (NBER): How to Talk When a Machine Is Listening: Corporate Disclosure in the Age of AI. So I clicked and downloaded, as one does. And then started to read.

The paper is an analysis of the 10-K and 10-Q filings that American public companies are obliged to file with the Securities and Exchange Commission (SEC). The 10-K is a version of a company’s annual report, but without the glossy photos and PR hype: a corporate nerd’s delight. It has, says one guide, “the-everything-and-the-kitchen-sink data you can spend hours going through – everything from the geographic source of revenue to the maturity schedule of bonds the company has issued”. Some investors and commentators (yours truly included) find the 10-K impenetrable, but for those who possess the requisite stamina (big companies can have 10-Ks that run to several hundred pages), that’s the kind of thing they like. The 10-Q filing is the 10-K’s quarterly little brother.

The observation that triggered the research reported in the paper was that “mechanical” (ie machine-generated) downloads of corporate 10-K and 10-Q filings increased from 360,861 in 2003 to about 165m in 2016, when 78% of all downloads appear to have been triggered by request from a computer. A good deal of research in AI now goes into assessing how good computers are at extracting actionable meaning from such a tsunami of data. There’s a lot riding on this, because the output of machine-read reports is the feedstock that can drive algorithmic traders, robot investment advisers, and quantitative analysts of all stripes.

The NBER researchers, however, looked at the supply side of the tsunami – how companies have adjusted their language and reporting in order to achieve maximum impact with algorithms that are reading their corporate disclosures. And what they found is instructive for anyone wondering what life in an algorithmically dominated future might be like.

The researchers found that “increasing machine and AI readership … motivates firms to prepare filings that are more friendly to machine parsing and processing”. So far, so predictable. But there’s more: “firms with high expected machine downloads manage textual sentiment and audio emotion in ways catered to machine and AI readers”.

In other words, machine readability – measured in terms of how easily the information can be parsed and processed by an algorithm – has become an important factor in composing company reports. So a table in a report might have a low readability score because its formatting makes it difficult for a machine to recognise it as a table; but the same table could receive a high readability score if it made effective use of tagging.

The researchers contend, though, that companies are now going beyond machine readability to try and adjust the sentiment and tone of their reports in ways that might induce algorithmic “readers” to draw favourable conclusions about the content. They do so by avoiding words that are listed as negative in the criteria given to text-reading algorithms. And they are also adjusting the tones of voice used in the standard quarterly conference calls with analysts, because they suspect those on the other end of the call are using voice analysis software to identify vocal patterns and emotions in their commentary.

In one sense, this kind of arms race is predictable in any human activity where a market edge may be acquired by whoever has better technology. It’s a bit like the war between Google and the so-called “optimisers” who try to figure out how to game the latest version of the search engine’s page ranking algorithm. But at another level, it’s an example of how we are being changed by digital technology – as Brett Frischmann and Evan Selinger argued in their sobering book Re-Engineering Humanity.

After I’d typed that last sentence, I went looking for publication information on the book and found myself trying to log in to a site that, before it would admit me, demanded that I solve a visual puzzle: on an image of a road junction divided into 8 x 4 squares I had to click on all squares that showed traffic lights. I did so, and was immediately presented with another, similar puzzle, which I also dutifully solved, like an obedient monkey in a lab.

And the purpose of this absurd challenge? To convince the computer hosting the site that I was not a robot. It was an inverted Turing test in other words: instead of a machine trying to fool a human into thinking that it was human, I was called upon to convince a computer that I was a human. I was being re-engineered. The road to the future has taken a funny turn.


Getting Answers for Core ML deployment from my own Book

I was working today on the deployment of a small neural network model prototype, converted to Core ML, to be used in an iPhone app.

I was trying to find the best way to get things to work and then it occurred to me I had solved a similar issue before... where‽ when‽ aha!

The answer was actually in my book, Advanced Data Science and Analytics with Python.
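
Just as a pointer, the conversion step I was revisiting looks roughly like the sketch below. This is a minimal, hedged example using coremltools' unified converter (coremltools 4+); the tiny Keras network and the file name are hypothetical stand-ins, not the actual model from the book or the app:

import coremltools as ct
from tensorflow import keras

# A tiny hypothetical network standing in for the real prototype
model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1),
])

# Convert to Core ML and save the .mlmodel so Xcode can pick it up
mlmodel = ct.convert(model)
mlmodel.save("Prototype.mlmodel")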


Computer Programming Knowledge

I came across the image above in the Slack channel of the University of Hertfordshire Centre for Astrophysics Research. It summarises some of the fundamental knowledge in computer science that was assumed necessary at some point in time: binary, CPU execution and algorithms.

They refer to 7 algorithms but, rather than specific algorithms, these are really classes (a quick sketch of one of them follows the list):

  1. Sort
  2. Search
  3. Hashing
  4. Dynamic Programming
  5. Binary Exponentiation
  6. String Matching and Parsing
  7. Primality Testing
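
To make one of these concrete, here is a minimal sketch of binary exponentiation in Python (my own illustration, not taken from the graphic): it computes base**exp in O(log exp) multiplications by repeated squaring, optionally reducing modulo mod at each step.

def power(base, exp, mod=None):
    """Compute base**exp (optionally mod `mod`) by repeated squaring."""
    result = 1
    while exp > 0:
        if exp & 1:  # lowest bit of the exponent is set: multiply it in
            result = result * base if mod is None else (result * base) % mod
        base = base * base if mod is None else (base * base) % mod
        exp >>= 1    # move on to the next bit of the exponent
    return result

print(power(3, 13))       # 1594323
print(power(3, 13, 100))  # 23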

I like the periodic table shown at the bottom of the graphic, showing some old friends such as Fortran, C, Basic and Cobol; some others that are probably not used all that much; and others that have definitely been on the rise: JavaScript, Java, C++, Lisp. It is great to see Python, number 35, listed as Multi-Paradigm!

Enjoy!


LibreOffice - Dialogue boxes showing blanks

I have been using LibreOffice on and off for a few years now and generally I think it is a great alternative to the MS Office offering. It does the tasks that are required, and the improvements across versions have been steady and useful.

I had, however, a very strange experience in which dialogue boxes and other windows, such as alerts and messages, just showed blank text. It was obvious that there was some important information in them, but it was not possible to read it. In some cases that was OK... I mean, I knew where the "OK" button was expected to appear, or where "Cancel" should be placed. However, it was an annoying (at best) and limiting (at worst) experience.

After digging in a bit I realised what the problem was: the fonts that were supposed to be showing were at fault. The culprits were as follows:

  • DINRegular.ttf, and
  • DINRegularAlternate.ttf

After removing these two fonts from ~/Library/Fonts/ everything went back to normal. I hope this helps in case you are having a similar issue.
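
If you prefer the Terminal, a minimal way to do this (assuming the fonts do live in ~/Library/Fonts/) is to move them out of the way rather than deleting them outright, e.g. to the Desktop:

$ mv ~/Library/Fonts/DINRegular.ttf ~/Library/Fonts/DINRegularAlternate.ttf ~/Desktop/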


MacOS - No Floating Thumbnail when taking a screenshot

Have you tried taking a screenshot on your Mac and been annoyed at having to wait for the floating thumbnail - in other words, waiting around five seconds before the screenshot becomes a file? Well, here you can find out how to get rid of that.

Follow these steps:

1) Type CMD + SHIFT + 5
2) Click OPTIONS
3) Uncheck "Show Floating Thumbnail"
4) Et voilà!

See the screenshot above!
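
Incidentally, the same setting can apparently be toggled from the Terminal via the screencapture preferences domain. Treat the key name below as an assumption and verify it on your macOS version:

$ defaults write com.apple.screencapture show-thumbnail -bool false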


Adding new conda environment kernel to Jupyter and nteract

I know there are a ton of posts out there covering this very topic. I am writing this one more for my own benefit, so that I have a reliable place to check the commands I need to add a new conda environment to my Jupyter and nteract IDEs.

First, to create an environment that contains, say, TensorFlow, Pillow, Keras and pandas, we need to type the following in the command line:

$ conda create -n tensorflow_env tensorflow pillow keras pandas jupyter ipykernel nb_conda

Now, to add this to the list of available environments in either Jupyter or nteract, we type the following (note that the environment name must match the one we created above; the --user flag installs the kernel spec for the current user, avoiding permission issues):

$ conda activate tensorflow_env

$ python -m ipykernel install --user --name tensorflow_env

$ conda deactivate

Et voilà, you should now see the environment in the dropdown menu!


Python - Pendulum

Working with dates and times in programming can be a painful task at times. In Python there are some excellent libraries that help with all the pain, and recently I became aware of Pendulum. It is effectively a replacement for the standard datetime class, with a number of improvements. Check out the documentation for further information.

Installation of the package is straightforward with pip:

$ pip install pendulum

For example, some simple manipulations involving time zones:

import pendulum

now = pendulum.now('Europe/Paris')

# Changing timezone
now.in_timezone('America/Toronto')

# Default support for common datetime formats
now.to_iso8601_string()

# Shifting
now.add(days=2)

Duration can be used as a replacement for the standard timedelta class:

dur = pendulum.duration(days=15)

# More properties
dur.weeks
dur.hours

# Handy methods
dur.in_hours()                # 360
dur.in_words(locale='en_us')  # '2 weeks 1 day'

It also supports the definition of a period, i.e. a duration that is aware of the DateTime instances that created it. For example:

dt1 = pendulum.now()
dt2 = dt1.add(days=3)

# A period is the difference between 2 instances
period = dt2 - dt1

period.in_weekdays()
period.in_weekend_days()

# A period is iterable
for dt in period:
    print(dt)


Give it a go, and let me know what you think of it. 


File Encoding with the Command Line - Determining and Converting

With the changes that Python 3 brought about in terms of dealing with character encodings, I have written before about some tips that I use in my day-to-day work. It is sometimes useful to determine the character encoding of a file at a much earlier stage, and the command line is a perfect tool to help us with these issues.

The basic syntax you need is the following one (note that the capital -I is the macOS flag; on GNU/Linux the equivalent is lowercase -i):

$ file -I filename

Furthermore, you can even use the command line to convert the encoding of a file into another one. The syntax is as follows:

$ iconv -f encoding_source -t encoding_target filename

For instance, if you needed to convert an ISO-8859-1 file called input.txt into UTF-8, you could use the following line:

$ iconv -f iso-8859-1 -t utf-8 < input.txt > output.txt

If you want to check the list of known character encodings that you can handle with this command, simply type:

$ iconv --list

Et voilà!

 


IEEE Language Rankings 2018

Python retains its top spot in the fifth annual IEEE Spectrum top programming language rankings, and also gains a designation as an "embedded language". Data science language R remains the only domain-specific language in the top 10 (where it is listed as an "enterprise language"), dropping one place compared to its 2017 ranking to take the #7 spot.

Looking at other data-oriented languages, Matlab is at #11 (up 3 places), SQL is at #24 (down 1), Julia at #32 (down 1) and SAS at #40 (down 3). Click the screenshot below for an interactive version of the chart, where you can also explore the top 50 rankings.

Language Rank

The IEEE Spectrum rankings are based on search, social media, and job listing trends, GitHub repositories, and mentions in journal articles. You can find details on the ranking methodology here, and discussion of the trends behind the 2018 rankings at the link below.

IEEE Spectrum: The 2018 Top Programming Languages


Building things with Python

Very pleased to see some of the things we are building with the Intro to Python for Data Science class this evening.


Finding iBooks Files in My Mac

I was looking for the location of iBooks files (including ePub, PDFs and others) so that I could curate the list of manually exported files. Finding iBooks files on my Mac should not be a difficult task, although it took a few minutes. I thought I would share it here on the blog for future reference, and in the hope that some of you may find it useful.

We will use the Terminal, as doing things from Finder tends to redirect us. A first place to look is the following:

~/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks

Now, that may not be the entire list of your books. In case you have enabled iCloud, then things may be stored in your Mobile Documents folder:

cd ~/Library/Mobile\ Documents/iCloud~com~apple~iBooks/Documents/

For things that you have bought in the iBooks store, take a look here:

cd ~/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks
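
If you want a quick inventory of what is in these folders, something along these lines should list the ePub and PDF files (adjust the path to whichever of the folders above applies):

$ find ~/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks \( -name '*.epub' -o -name '*.pdf' \)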

Et voilà!

 


CoreML - Boston Model: The Complete App

Look how far we have come... We started this series by looking at what CoreML is and made sure that our environment was suitable. We decided to use linear regression as our model, and chose to use the Boston Price dataset in our exploration for this implementation. We built our model using Python, created our .mlmodel object and had a quick exploration of the model's properties. We then started to build our app using Xcode (see Part 1, Part 2 and Part 3). In this final part we are going to take the .mlmodel and include it in our Xcode project; we will then use the inputs selected from our picker and calculate a prediction (based on our model) to be displayed to the user. Are you ready? Here we go!

Let us start by adding the .mlmodel we created earlier on so that it is an available resource in our project. Open your Xcode project and locate your PriceBoston.mlmodel file. From the menu on the left-hand side select the "BostonPricer" folder. At the bottom of the window you will see a + sign; click on it and select "New Group". This will create a sub-folder within "BostonPricer". Select the new folder and hit the return key; this will let you rename the folder to something more useful. In this case I am going to call this folder "Resources".

Open Finder and navigate to the location of your PriceBoston.mlmodel. Click and drag the file inside the "Resources" folder we just created. This will open a dialogue box asking for some options for adding this file to your project. I selected the "Create folder references" option and left the rest as shown by default. After hitting "Finish" you will see your model now being part of your project. Let's now go to the code in ViewController and make some needed changes. The first one is to tell our project that we are going to need the powers of the CoreML framework. At the top of the file, locate the line of code that imports UIKit and, right below it, type the following:

import CoreML

Inside the definition of the ViewController class, let us define a constant to reference the model. Look for the definitions of the crimeData and roomData constants and nearby them type the following:

let model = PriceBoston()

You will see that when you start typing the name of the model, Xcode will suggest the right name as it knows about the existence of the model as part of its resources, neat!

We need to make some changes to the getPrediction() function we created in the last post. Go to the function, look for the place where we pick the values of crime and rooms, and right after that write the following:

guard let priceBostonOutput = try? model.prediction(
    crime: crime,
    rooms: Double(rooms)
) else {
    fatalError("Unexpected runtime error.")
}

You may get a warning telling you that the constant priceBostonOutput was defined but not used. Don't worry, we will indeed use it in a little while. Just a couple of words about this piece of code: you will see that we are using the prediction method defined in the model and that we are passing the two input parameters that the model expects, namely crime and rooms. We are wrapping this call to the prediction method in a guard with try? so that we can bail out gracefully if the prediction fails. This is where we are implementing our CoreML model!!! Isn't that cool‽

We are not done yet, though; remember that we have that warning from Xcode about using the model. Looking at the properties of the model, we can see that it also has an output attribute called price. This is the prediction we are looking for and the one we would like to display. Out of the box it may have a lot of decimal figures, and it is never good practice to display those to the user (although they are important in precision terms...). Also, with Swift's strong typing we have to convert the Double returned by the model into a string that can be printed. So, let us prepare some code to format the predicted price. At the top of the ViewController class, find the place where we defined the constants crimeData and roomData. Below them, type the following code:

let priceFormat: NumberFormatter = {
    let formatting = NumberFormatter()
    formatting.numberStyle = .currency
    formatting.maximumFractionDigits = 2
    formatting.locale = Locale(identifier: "en_US")
    return formatting
}()

We are defining a format that will show a number as currency in US dollars with two decimal figures. We can now pass our predicted price to this formatter and assign the result to a new constant for future reference. Back inside the getPrediction function, below the guard statement we added, write the following:

let priceText = priceFormat.string(from: NSNumber(value: priceBostonOutput.price))

Now we have a nicely formatted string that can be used in the display. Let us change the message that we are asking our app to show when pressing the button:

let message = "The predicted price (in $1,000s) is " + priceText!

We are done! Launch your app simulator, select a couple of values from the picker and hit the "Calculate Prediction" button... Et voilà, we have completed our first implementation of a CoreML model in a working app.

There are many more things that we could do to improve the app. For instance, we could impose some constraints on the position of the different elements shown on the screen so that we can deploy the application on the various screen sizes offered by Apple devices, improve the design and usability of the app, and design appropriate icons for it (in various sizes). For the time being, I will leave some of those tasks for later. In the meantime you can take a look at the final code on my GitHub site here.

Enjoy and do keep in touch, I would love to hear if you have found this series useful.

 


JupyterLab is Ready for Users

This is a reblog of this original post.

We are proud to announce the beta release series of JupyterLab, the next-generation web-based interface for Project Jupyter.

Project Jupyter Feb 20
tl;dr: JupyterLab is ready for daily use (documentation, try it with Binder)


JupyterLab is an interactive development environment for working with notebooks, code, and data.

The Evolution of the Jupyter Notebook

Project Jupyter exists to develop open-source software, open standards, and services for interactive and reproducible computing.

Since 2011, the Jupyter Notebook has been our flagship project for creating reproducible computational narratives. The Jupyter Notebook enables users to create and share documents that combine live code with narrative text, mathematical equations, visualizations, interactive controls, and other rich output. It also provides building blocks for interactive computing with data: a file browser, terminals, and a text editor.

The Jupyter Notebook has become ubiquitous with the rapid growth of data science and machine learning and the rising popularity of open-source software in industry and academia:

  • Today there are millions of users of the Jupyter Notebook in many domains, from data science and machine learning to music and education. Our international community comes from almost every country on earth.¹
  • The Jupyter Notebook now supports over 100 programming languages, most of which have been developed by the community.
  • There are over 1.7 million public Jupyter notebooks hosted on GitHub. Authors are publishing Jupyter notebooks in conjunction with scientific research, academic journals, data journalism, educational courses, and books.

At the same time, the community has faced challenges in using various software workflows with the notebook alone, such as running code from text files interactively. The classic Jupyter Notebook, built on web technologies from 2011, is also difficult to customize and extend.

JupyterLab: Ready for Users

JupyterLab is an interactive development environment for working with notebooks, code and data. Most importantly, JupyterLab has full support for Jupyter notebooks. Additionally, JupyterLab enables you to use text editors, terminals, data file viewers, and other custom components side by side with notebooks in a tabbed work area.


JupyterLab enables you to arrange your work area with notebooks, text files, terminals, and notebook outputs.

JupyterLab provides a high level of integration between notebooks, documents, and activities:

  • Drag-and-drop to reorder notebook cells and copy them between notebooks.
  • Run code blocks interactively from text files (.py, .R, .md, .tex, etc.).
  • Link a code console to a notebook kernel to explore code interactively without cluttering up the notebook with temporary scratch work.
  • Edit popular file formats with live preview, such as Markdown, JSON, CSV, Vega, VegaLite, and more.

JupyterLab has been over three years in the making, with over 11,000 commits and 2,000 releases of npm and Python packages. Over 100 contributors from the broader community have helped build JupyterLab in addition to our core JupyterLab developers.

To get started, see the JupyterLab documentation for installation instructions and a walk-through, or try JupyterLab with Binder. You can also set up JupyterHub to use JupyterLab.
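
[A quick practical note from me, not part of the original post: at the time of writing, installing JupyterLab was typically a one-liner with pip or conda; check the official documentation for the current instructions.]

$ pip install jupyterlab

$ conda install -c conda-forge jupyterlab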

Customize Your JupyterLab Experience

JupyterLab is built on top of an extension system that enables you to customize and enhance JupyterLab by installing additional extensions. In fact, the built-in functionality of JupyterLab itself (notebooks, terminals, file browser, menu system, etc.) is provided by a set of core extensions.


JupyterLab extensions enable you to work with diverse data formats such as GeoJSON, JSON and CSV.²

Among other things, extensions can:

  • Provide new themes, file editors and viewers, or renderers for rich outputs in notebooks;
  • Add menu items, keyboard shortcuts, or advanced settings options;
  • Provide an API for other extensions to use.

Community-developed extensions on GitHub are tagged with the jupyterlab-extension topic, and currently include file viewers (GeoJSON, FASTA, etc.), Google Drive integration, GitHub browsing, and ipywidgets support.

Develop JupyterLab Extensions

While many JupyterLab users will install additional JupyterLab extensions, some of you will want to develop your own. The extension development API is evolving during the beta release series and will stabilize in JupyterLab 1.0. To start developing a JupyterLab extension, see the JupyterLab Extension Developer Guide and the TypeScript or JavaScript extension templates.

JupyterLab itself is co-developed on top of PhosphorJS, a new JavaScript library for building extensible, high-performance, desktop-style web applications. We use modern JavaScript technologies such as TypeScript, React, Lerna, Yarn, and webpack. Unit tests, documentation, consistent coding standards, and user experience research help us maintain a high-quality application.

JupyterLab 1.0 and Beyond

We plan to release JupyterLab 1.0 later in 2018. The beta releases leading up to 1.0 will focus on stabilizing the extension development API, user interface improvements, and additional core features. All releases in the beta series will be stable enough for daily usage.

JupyterLab 1.0 will eventually replace the classic Jupyter Notebook. Throughout this transition, the same notebook document format will be supported by both the classic Notebook and JupyterLab.

Get Involved

There are many ways you can participate in the JupyterLab effort. We welcome contributions from all members of the Jupyter community:

  • Use our extension development API to make your own JupyterLab extensions. Please add the jupyterlab-extension topic if your extension is hosted on GitHub. We appreciate feedback as we evolve toward a stable API for JupyterLab 1.0.
  • Contribute to the development, documentation, and design of JupyterLab on GitHub. To get started with development, please see our Contributing Guide and Code of Conduct. We label issues that are ideal for new contributors as “good first issue” or “help wanted”.
  • Connect with us on our GitHub Issues page or on our Gitter Channel. If you find a bug, have questions, or want to provide feedback, please join the conversation.

We are thrilled to see how you use and extend JupyterLab.

Sincerely,

The JupyterLab Team and Project Jupyter

We thank Bloomberg and Anaconda for their support and collaboration in developing JupyterLab. We also thank the Alfred P. Sloan Foundation, the Gordon and Betty Moore Foundation, and the Helmsley Charitable Trust for their support.

[1] Based on the 249 country codes listed under ISO 3166–1, recent Google analytics data from 2018 indicates that jupyter.org has hosted visitors from 213 countries.

[2] Data visualized in this screenshot is licensed CC-BY-NC 3.0. See http://datacanvas.org/public-transportation/ for more details.

