Launch with AI in 1 week or less

(Image: https://www.reddit.com/r/spaceporn/comments/81q31g/twin_engine_atlas_v_with_four_solid_rocket_motors/)

Whether you're a new startup or an existing business, here's one way you can get an AI-enabled product or service into production in 1 week or less. An…

How I trained a language detection AI in 20 minutes with 97% accuracy

Weird — I actually kind of look like that guy

This story is a step-by-step guide to how I built a language detection model using machine learning (that ended up being 97% accurate) in under 20 minutes. Language detection is a great use case for machine l…

Diabetes Prediction — Artificial Neural Network Experimentation

As data science professionals, we tend to learn about all the available techniques to crunch our data and deduce meaningful insights from it. In this article, I have described my exper…

Friendlier data labelling using generated Google Forms

Manually labelling data is nobody's favourite machine learning chore. But you needn't worry about asking others to help out, provided you can give them a pleasant tool for the task. Let me present to you: generated Google Forms using Google App Script!

Google App Scripts allow you to build automation between Google Apps

The usual way people label data is simply by typing the labels into a spreadsheet. I would normally do this as well; however, in a recent task I needed to label paragraphs of text. Have you ever tried to read paragraphs of text in a spreadsheet? It's hell! Luckily, whilst trying to figure out a way to make the labelling process less gruelling, I came across a way of auto-generating a form from the data in a spreadsheet using Google App Script.

Nasty! Nobody wants to strain their eyes trying to read documents in spreadsheet cells!

Creating the script that will generate our Form

To get started, we jump into the App Script editor from within the Google Spreadsheet containing the data we want to gather labels for:

Opening the App Script editor from a Google Spreadsheet

Using App Script (pssst! it's just JavaScript) we can read the spreadsheet data and send commands to other Google Apps (in this case, Google Forms).

What's great about using Forms for labelling is that you can guarantee consistency in the user input by specifying the input type of each question. For example:

Number range:

form.addScaleItem()
    .setTitle(dataToLabel)  // the data being labelled becomes the question title
    .setBounds(1, 10)       // allow answers from 1 to 10
    .setRequired(true);

Binary label:

var item = form.addCheckboxItem(); // keep a reference so we can create choices on it
item.setTitle(dataToLabel)
    .setChoices([
      item.createChoice('Is a cat')
    ]);

Multi-class label:

var item = form.addMultipleChoiceItem(); // keep a reference so we can create choices on it
item.setTitle(dataToLabel)
    .setChoices([
      item.createChoice('Cats'),
      item.createChoice('Dogs'),
      item.createChoice('Fish')
    ]);

See the details for more input types in the App Script API docs (or just look at the different input types when manually creating a Google Form).

You can grab the script I have used to generate a Form for labelling text documents with numbers 0 to 10 from my Github:

ZackAkil/friendlier-data-labelling

After you have your script written (or copied and pasted), select the script's entry point and run it! Warning: you'll probably have to jump through a few authorisation hoops the first time you do this.

Make sure to select the entry point function of the script before running.

Using the generated Form

After the script has run, you can head over to your Google Forms and there you should find a brand new Form! You can send the Form to whoever you want to do the labelling:

Finally, you can send your labellers a convenient link to a familiar Google Form that they can use to carry out the labelling task.

Accessing the data labels

After the labelling is done, you can then just view the labels as a spreadsheet and export as a CSV:

It's pretty straightforward to get the labels out as a CSV.
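
If you want to pull those labels straight into Python, here is a minimal sketch (assuming pandas, plus a hypothetical file name; Forms names each response column after its question):

import pandas as pd

# hypothetical export name; download via File > Download > CSV from the responses sheet
responses = pd.read_csv('form_responses.csv')

# Forms always prepends a Timestamp column; the remaining columns hold the labels
labels = responses.drop(columns=['Timestamp'])
print(labels.head())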

Hopefully this saves you a bit of headache in your future machine learning efforts!

The full script and dataset used in this article can be found on my Github:

ZackAkil/friendlier-data-labelling



Using Deep Learning to improve FIFA 18 graphics

Comparison of Cristiano Ronaldo's face, with the left one from FIFA 18 and the right one generated by a Deep Neural Network.

Game studios spend millions of dollars and thousands of development hours designing game graphics, trying to make them look as…

Machine Learning with IBM PowerAI: Getting Started with Image Classification (Part 1)

IBM Power Systems

Introduction

Image classification has become one of the key pilot use cases for demonstrating machine learning. In this short article, I describe how to implement such a solution using IBM PowerAI, and compare GPU and CPU performance while running it on IBM Power Systems.

Artificial Intelligence

Artificial Intelligence is currently seen as a branch of computer science that deals with making computers perform tasks traditionally attributed to human intelligence, like visual recognition, speech identification, cognitive decision-making, and language translation.

Machine Learning

Machine Learning, commonly viewed as an application of Artificial Intelligence, deals with giving systems the ability to learn and improve from experience without every task being explicitly coded.

Deep Learning

Deep Learning is a subset of Machine Learning in which systems can learn from labeled training data (supervised) or unlabeled training data (unsupervised). Deep Learning typically uses a hierarchy of artificial neural network layers to carry out a task.

Artificial Neural Networks

Artificial Neural Networks are systems inspired by biological neural networks that can perform certain tasks, like image classification, with amazing accuracy. For image classification, for example, a set of labeled images of animals is provided as training data. The Artificial Neural Network, over a series of steps (or layers), helps the system learn to classify unlabeled images (an image of an orangutan, in the example shown in this article) as belonging to a certain group, while producing confidence scores.
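
As a toy illustration of this train-then-classify loop, here is a minimal sketch using scikit-learn's small neural network classifier (my own example with random stand-in data; the rest of this article uses Caffe):

import numpy as np
from sklearn.neural_network import MLPClassifier

# hypothetical stand-in data: 100 "images" flattened to 64 features, labelled 0 or 1
rng = np.random.RandomState(0)
X, y = rng.rand(100, 64), rng.randint(0, 2, 100)

# train a small neural network on the labelled data...
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

# ...then classify an unlabelled sample and inspect the per-class confidence scores
print(clf.predict(X[:1]), clf.predict_proba(X[:1]))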

There are several applications of deep learning for business, ranging from cellphone personal assistants to self-driving cars, where rapidly changing patterns are used to classify objects in real time.

What is IBM PowerAI?

IBM PowerAI software lets you easily run all the popular machine learning frameworks, with minimal effort, on GPU-equipped IBM POWER9 servers. CPUs were designed for serial processing and contain a small number of cores, whereas GPUs can contain thousands of smaller cores that process tasks in parallel, which makes machine learning workloads a key application for GPUs. Check out the IBM Power System AC922 servers, touted as some of the best servers on the market for running enterprise AI tasks. IBM PowerAI currently includes the following frameworks:

Source: https://www.ibm.com/us-en/marketplace/deep-learning-platform

Current setup

For this demo, I used a container on a VM running Ubuntu on Power (ppc64le), hosted on Nimbix Cloud.

A container is a running instance of an image. An image is a template containing the OS, software, and application code, all bundled in one file. Images are defined using a Dockerfile, a list of steps that configure the image. The Dockerfile is built to create an image, and the image is run to get a running container. To run the image, you need Docker Engine installed and configured on the VM.
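
To make the Dockerfile-to-image-to-container flow concrete, here is a minimal sketch using the docker Python SDK (my own illustration with a hypothetical tag; the demo itself uses a pre-built image on Nimbix):

import docker

client = docker.from_env()

# build an image from the Dockerfile in the current directory (Dockerfile -> image)
image, build_logs = client.images.build(path='.', tag='powerai-demo')

# start a container from that image (image -> running container)
container = client.containers.run('powerai-demo', detach=True)
print(container.short_id)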

Here is the Dockerfile I used, written by Indrajit Poddar. This is taken from this Github page.

https://medium.com/media/03785aa96bf3b9e5fc216cb45ffc6f97/href

This builds an image with Jupyter Notebook, iTorch Kernel (we’ll discuss this in the second part) and some base TensorFlow examples.

TensorFlow is an open source, scalable library for Machine Learning applications, based on the concept of a data flow graph that can be built and then executed. A graph has two kinds of components: nodes (operations) and edges (the tensors that flow between them). It comes with a Python API, making it easy to assemble a network, assign parameters, and run your training models.
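
As a minimal illustration of this build-then-execute model (a sketch assuming the TensorFlow 1.x API that was current at the time):

import tensorflow as tf

# build phase: define a small data flow graph; nothing is computed yet
graph = tf.Graph()
with graph.as_default():
    a = tf.constant(3.0, name='a')  # nodes are operations...
    b = tf.constant(4.0, name='b')
    c = a * b                       # ...and edges carry tensors between them

# execute phase: run the graph inside a session
with tf.Session(graph=graph) as sess:
    print(sess.run(c))  # 12.0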

The steps below were demonstrated by Indrajit Poddar. He has built a test image on Nimbix Cloud that runs the aforementioned services within a few minutes of being deployed.

The following command is used to verify if the GPU is attached to the container.

root@JARVICENAE-0A0A1841:/usr/lib/nvidia-384# nvidia-smi
Thu Feb  1 23:45:11 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-SXM2...  Off  | 00000003:01:00.0 Off |                    0 |
| N/A   40C    P0    42W / 300W |    299MiB / 16276MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

I see an Nvidia Tesla P100 GPU attached. The following command shows the installed Jupyter Notebook instances and the associated tokens that will be used for authentication later.

root@JARVICENAE-0A0A1841:/usr/lib/nvidia-384# jupyter notebook list
Currently running servers:
http://0.0.0.0:8889/?token=d0f34d33acc9febe500354a9604462e8af2578f338981ad1 :: /opt/DL/torch
http://0.0.0.0:8888/?token=befd7faf9b806b6918f0618a28341923fb9a1e77d410b669 :: /opt/DL/caffe-ibm
http://0.0.0.0:8890/?token=a9448c725c4ce2af597a61c47dcdb4d1582344d494bd132f :: /opt/DL/tensorflow
root@JARVICENAE-0A0A1841:/usr/lib/nvidia-384#

Starting Image Classification

What is Caffe?

Caffe (Convolutional Architecture for Fast Feature Embedding) was developed at the Berkeley Vision and Learning Center. It is an open source framework for tasks like image classification. It supports CUDA and Convolutional Neural Networks and ships with pre-trained models, which makes it a good choice for this demo.

We’ll use Python to perform all the tasks. The steps below were done via Jupyter Notebook. First, let’s set up Python, Numpy, and Matplotlib.

import numpy as np
import matplotlib.pyplot as plt

# display plots in this notebook
%matplotlib inline

# set display defaults
plt.rcParams['figure.figsize'] = (10, 10)        # large images
plt.rcParams['image.interpolation'] = 'nearest'  # don't interpolate: show square pixels
plt.rcParams['image.cmap'] = 'gray'              # use grayscale output rather than a (potentially misleading) color heatmap

# Then, we load Caffe. The caffe module needs to be on the Python path;
# we'll add it here explicitly.
import sys
caffe_root = '../'  # this file should be run from {caffe_root}/examples (otherwise change this line)
sys.path.insert(0, caffe_root + 'python')
import caffe

What is CaffeNet?

CaffeNet is a convolutional neural network written to interface with CUDA, with the primary aim of classifying images. It is a variant of AlexNet; a 2015 presentation by the creators of AlexNet is available here. In the code below, we download a pre-trained model.

import os
if os.path.isfile(caffe_root + ‘models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel’):
print ‘CaffeNet found.’
else:
print ‘Downloading pre-trained CaffeNet model…’
!../scripts/download_model_binary.py ../models/bvlc_reference_caffenet

Here is the output.

CaffeNet found.
Downloading pre-trained CaffeNet model...
…100%, 232 MB, 42746 KB/s, 5 seconds passed

Then, we load Caffe in CPU mode and work with input preprocessing.

caffe.set_mode_cpu()

model_def = caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt'
model_weights = caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'

net = caffe.Net(model_def,      # defines the structure of the model
                model_weights,  # contains the trained weights
                caffe.TEST)     # use test mode (e.g., don't perform dropout)

Caffe's caffe.io.Transformer is used here; it is the default transformer used in all the examples. It computes a transformed, mean-subtracted value for an image based on the input provided. CaffeNet expects input images in BGR format with values in the range 0 to 255, whereas Matplotlib loads images in RGB format with values in the range 0 to 1, so the transformer converts between the two representations.

# load the mean ImageNet image (as distributed with Caffe) for subtraction
mu = np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
mu = mu.mean(1).mean(1)  # average over pixels to obtain the mean (BGR) pixel values
print 'mean-subtracted values:', zip('BGR', mu)

# create transformer for the input called 'data'
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # move image channels to outermost dimension
transformer.set_mean('data', mu)                 # subtract the dataset-mean value in each channel
transformer.set_raw_scale('data', 255)           # rescale from [0, 1] to [0, 255]
transformer.set_channel_swap('data', (2, 1, 0))  # swap channels from RGB to BGR

In other words, the computer classifies an image by first converting it to an array of pixel values, then passing those values through the network's layers, which detect patterns learned from the pre-trained model. The comparison produces confidence metrics that show how certain the classification is.

Here is the output.

mean-subtracted values: [(‘B’, 104.0069879317889), (‘G’, 116.66876761696767), (‘R’, 122.6789143406786)]
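
To make those confidence metrics concrete: the network's final layer produces one raw score per class, and a softmax squashes them into probabilities that sum to one. Here is a minimal numpy sketch with hypothetical scores (my illustration, not part of the original walkthrough):

import numpy as np

def softmax(scores):
    # subtract the max before exponentiating, for numerical stability
    e = np.exp(scores - scores.max())
    return e / e.sum()

raw_scores = np.array([4.2, 0.8, -1.3])  # hypothetical raw class scores
print(softmax(raw_scores))               # ~[0.964, 0.032, 0.004]: one confidence per class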

Classification

Here, we set the default size of the images. This can be changed later depending on your input.

net.blobs['data'].reshape(50,        # batch size
                          3,         # 3-channel (BGR) images
                          720, 720)  # image size is 720x720

Next, we load the image of an Orangutan from the Wiki Commons library.

# download the image
my_image_url = 'https://upload.wikimedia.org/wikipedia/commons/b/be/Orang_Utan%2C_Semenggok_Forest_Reserve%2C_Sarawak%2C_Borneo%2C_Malaysia.JPG'  # paste your URL here
!wget -O image.jpg $my_image_url

# transform it and copy it into the net
image = caffe.io.load_image('image.jpg')
transformed_image = transformer.preprocess('data', image)
plt.imshow(image)

Here is the output.

--2018-02-02 00:27:52--  https://upload.wikimedia.org/wikipedia/commons/b/be/Orang_Utan%2C_Semenggok_Forest_Reserve%2C_Sarawak%2C_Borneo%2C_Malaysia.JPG
Resolving upload.wikimedia.org (upload.wikimedia.org)... 198.35.26.112, 2620:0:863:ed1a::2:b
Connecting to upload.wikimedia.org (upload.wikimedia.org)|198.35.26.112|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1443340 (1.4M) [image/jpeg]
Saving to: 'image.jpg'
image.jpg           100%[===================>]   1.38M  5.25MB/s    in 0.3s
2018-02-02 00:27:54 (5.25 MB/s) - 'image.jpg' saved [1443340/1443340]

Now, let’s classify the image.

# copy the image data into the memory allocated for the net
net.blobs['data'].data[...] = transformed_image

# perform classification
output = net.forward()
output_prob = output['prob'][0]  # the output probability vector for the first image in the batch

print 'predicted class is:', output_prob.argmax()

The output was 'predicted class is: 281', i.e., the image was assigned to class 281. Let's load the ImageNet labels and view the label for that class.

# load ImageNet labels
labels_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
if not os.path.exists(labels_file):
    !../data/ilsvrc12/get_ilsvrc_aux.sh

labels = np.loadtxt(labels_file, str, delimiter='\t')
print 'output label:', labels[output_prob.argmax()]

Here’s the output. The class was correct!

output label: n02480495 orangutan, orang, orangutang, Pongo pygmaeus

The following code helps you come up with other top classes.

# sort top five predictions from softmax output
top_inds = output_prob.argsort()[::-1][:5]  # reverse sort and take five largest items

print 'probabilities and labels:'
zip(output_prob[top_inds], labels[top_inds])

Here is the output.

probabilities and labels:
[(0.96807814, 'n02480495 orangutan, orang, orangutang, Pongo pygmaeus'),
(0.030588957, 'n02492660 howler monkey, howler'),
(0.00085891742, 'n02493509 titi, titi monkey'),
(0.00015429058, 'n02493793 spider monkey, Ateles geoffroyi'),
(7.259626e-05, 'n02488291 langur')]

Analyzing GPU Performance

Here is the time taken to perform the classification in CPU-only mode.

%timeit net.forward()

Here is the output.

OUTPUT: 1 loop, best of 3: 3.06 s per loop

Three seconds per loop is quite long. Let's switch to GPU mode and run the same thing.

caffe.set_device(0) # if we have multiple GPUs, pick the first one
caffe.set_mode_gpu()
net.forward() # run once before timing to set up memory
%timeit net.forward()

Here is the output.

OUTPUT: 1 loop, best of 3: 11.4 ms per loop

That is an improvement of 3048.6 milliseconds, roughly a 268x speedup! This concludes the first part of this blog.

In the next part, we will take a look at how to train your own model using NVIDIA Digits and how to use Torch.

If you’ve enjoyed this piece, go ahead, give it a clap 👏🏻 (you can clap more than once)! You can also share it somewhere online so others can read it too.

Author: Upendra Rajan



Machine learning explained: Understanding supervised, unsupervised, and reinforcement learning

Once we start delving into the concepts behind Artificial Intelligence (AI) and Machine Learning (ML), we come across…


The AGI/Deep Learning Connection

Artificial General Intelligence

As an amazing course on AGI at MIT by Lex Fridman, one of my favourite lecturers, is about to begin (or might already have kicked off by the time this article is posted), I felt like writing about the very topic I have been reading about for quite a few months now.

"Almost all young people working on Artificial Intelligence look around and say – What's popular? Statistical learning. So I'll do that. That's exactly the way to kill yourself scientifically."

– Marvin Minsky, during his Society of Mind course at MIT in 2011.

Marvin Minsky, the famous American cognitive scientist and co-founder of MIT's AI Laboratory, never agreed with an overly simple approach towards AGI, or to replicating the functionality of the brain for that matter. But we still can't deny the progress deep learning has brought to the field. The brain may well not do gradient descent and the like, but it's unfair to dismiss deep learning on that basis alone.

There is much about the brain that we simply don't have access to when creating a replica of it. And while deep learning will very likely prove an essential component of truly intelligent machines, it is probably not enough on its own.

It can be argued that the core idea behind Ben Goertzel's work is something the vast majority of curious minds would have thought of long ago: programmatically designing human faculties as components of an AGI, using the concept of cognitive synergy, to create intelligence close to that of humans, if not on par with them. But no one was really able to bring this thought to reality… he did. There are many other great researchers worth mentioning, such as Marcus Hutter (AIXI) and Pei Wang (NARS), to name a few.

Despite its fair share of flaws, and probably without the kind of funding available to tech giants like Google, Facebook, and Tesla, OpenCog's contribution towards creating AGI simply cannot be ignored.

Guess what? Even Ben's work uses deep learning! There are numerous reasons why deep learning cannot be done away with. There is a need to rethink backpropagation, as Geoff Hinton himself has said, but no one can deny that it has been a huge success. Given its use in a plethora of applications, maybe someone will come up with an even more sensible alternative to deep learning, and to backpropagation in particular.

People who haven’t watched the movie ‘Ex Machina’ should — 
i) Skip to the next paragraph
ii) Watch the movie very soon!

I vividly remember the scene where Ava runs towards her own creator, Nathan, to kill him. It sends chills through the viewer's body.

Such scenarios have been shown time and time again in sci-fi movies, and very well-known people in the field have echoed them in their warnings about AI. What if our creations become the cause of our death? This raises the need to build coexistence and 'benevolence' into robots, so that they don't turn against us. But then again, there have been strong claims that AI today hasn't even reached the intelligence of a mouse.

I believe strongly in Ben Goertzel's idea of the need for cognitive synergy. What's that? It is the combination of different components, each designed to be intelligent, into one cognitive system in which they help each other carry out their respective tasks, so that the system can be called truly intelligent regardless of whether it has faced a particular real-world situation before.

One might take the example of transfer learning, wherein only the last few layers of a model are replaced and retrained while the rest remains unchanged, as sketched below.
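
As a concrete sketch of that idea (my own illustration, assuming PyTorch and torchvision are available):

import torch.nn as nn
import torchvision.models as models

# reuse a network pre-trained on ImageNet
model = models.resnet18(pretrained=True)

# freeze all the existing layers...
for param in model.parameters():
    param.requires_grad = False

# ...and swap in a fresh final layer for a new, hypothetical 10-class task;
# only this layer's weights will be trained
model.fc = nn.Linear(model.fc.in_features, 10)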

One could also imagine using this approach for several prediction tasks. Based on the system's requirements, a model could be trained to learn which situation calls for which type of final layers, so that the task is completed with efficiency greater than that of humans (because if not, then umm… what's the point?)

Yann LeCun’s post about Sophia

I obviously cannot vouch for how real Sophia actually is, how close to intelligence this Twitter-using robot comes, or how many of its functions it actually performs itself rather than someone on the developer team, but I do genuinely appreciate the idea and the effort put into implementing it, irrespective of the end product and its viability.

Now that I've mentioned Yann LeCun's post, I would also like to refer to the famous debate between him and Gary Marcus (@GaryMarcus). Here is the response of his to Gary's views that I personally found the most important —

“Does AI need more innate machinery? The answer probably is yes but the answer is also, not as much as Gary thinks. ”

- Yann LeCun, FAIR and NYU

While Gary brought up certain very interesting points for everyone to think about, there are moments when they start to sound extreme. The Facebook AI Director, on the other hand, was very calm and tackled all the arguments very sensibly.

Ali Rahimi's view on Gary Marcus's paper on the drawbacks of Deep Learning

It is true that the things both of these highly respected individuals agree on are very fundamental problems with the current state of AI worldwide. Still, Yann LeCun's stand on the debate topic was the more acceptable and logical one, while Gary Marcus was criticized for certain remarks he made, even in his recent papers.

One of the most well-received views on Gary's paper, by Thomas Dietterich

I am definitely not in a position to comment on, throw opinions around about, or take sides on great lines of thought born decades before I was, but consider these the thoughts of someone who has been closely following the work of the field's pioneers.

I simply believe that if AGI can be created one day, then deep learning is definitely going to play a vital role in its proper functioning. Companies like OpenAI inspire me and many others to believe that putting hours into research and into thinking about applications of AI will yield amazing results that revolutionize the way we live our lives.

It will be something that combines different fields of study, like neuroscience, philosophy, mathematics, physics, and computer science, which together will contribute to a masterpiece that may prove to be the best creation of mankind.

Indeed, the field of AI is split into various groups that believe in different ways of approaching the problem of AGI, and none of them seems incorrect.

These groups (with symbolic, behavioural, or other approaches towards AGI), each with their different ideologies and their fair share of drawbacks, need to be combined in such a way that each one nullifies another's drawbacks, which again brings us back to the idea of cognitive synergy. You see, none of the ideas can be ignored completely. Every approach, every attempt, every single line of code written towards a successful implementation of AGI is important.

At the same time, principles governing nations' usage of AGI should be laid down, and integrated into the machines themselves, so that they cannot be used for the wrong purposes. That is exactly what Ethical AI is about.

There are several questions like this looming over the concept of AGI that need to be answered in order to achieve even more substantial and groundbreaking outcomes in the area. Let's hope that, if AGI is possible, we're all on the right track towards it, and if not, that we at least eventually find the right way towards humans and robots coexisting to make the world a better place for us and for the generations to come!

I am sharing a couple of links and videos that people interested in learning about AGI should definitely check out –

Other articles that you might like — https://medium.com/@raksham_p

Stay tuned for more posts on Artificial Intelligence coming up very soon! 🙂



Overfitting vs. Underfitting: A Complete Example

Exploring and solving a fundamental data science problem

When you study data science you come to realize there are no truly complex ideas, just many simple building blocks combined together. A neural network may seem extremely advanced, but it's really …