Deep learning is interesting in many ways. But when you consider doing it across thousands of cores, processing millions of parameters, the problem becomes more interesting and more complex at the same time.
Google Datacenter (via Google)
Google ran an interesting experiment: training a deep network with millions of parameters across thousands of CPUs. The goal was to train on very large datasets without limiting the form of the model.
The paper describes DistBelief, a framework for distributed parallel computing applied to deep learning training. Among the features the framework manages by itself:
The framework automatically parallelises computation within each machine, using all available cores, and manages communication, synchronisation and data transfer between machines during both training and inference.
I couldn’t find much information about it beyond what is written in the paper.
They applied two algorithms: SGD (Stochastic Gradient Descent) and L-BFGS. These algorithms usually work well, but they don’t scale to very large data sets, which is why the authors introduce some modifications to them. The paper gives more details about the optimisations of both algorithms, which you may find interesting.
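The core of the paper's asynchronous SGD variant is that many workers update a shared set of parameters without locking, tolerating stale parameter reads. As a toy illustration of that idea only, here is a hypothetical, stdlib-only Java sketch (class and method names are mine, not the paper's) where several threads fit a one-parameter linear model with lock-free updates:

```java
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy sketch of asynchronous data-parallel SGD: each worker thread
// reads the shared parameter (possibly stale), computes a gradient on
// its own synthetic data shard, and writes the update back without
// locking. Lost or stale updates only slow convergence down.
public class AsyncSgdSketch {
    // Single shared parameter for a 1-D linear model y = w * x.
    static volatile double w = 0.0;

    public static double train(int workers, int stepsPerWorker)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            final long seed = i;
            pool.submit(() -> {
                Random rnd = new Random(seed);
                double lr = 0.01;
                for (int step = 0; step < stepsPerWorker; step++) {
                    double x = rnd.nextDouble();   // this worker's data shard
                    double y = 2.0 * x;            // true model: w* = 2
                    double pred = w * x;           // possibly stale read
                    double grad = (pred - y) * x;  // squared-loss gradient
                    w -= lr * grad;                // deliberately lock-free
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return w;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("learned w = " + train(4, 20000));
    }
}
```

Even with races on `w`, all workers push toward the same fixed point, so the learned weight ends up close to 2; the real framework adds a sharded parameter server and network communication on top of this basic pattern.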
I found the idea of distributed parallel computing over very large datasets with these algorithms really interesting.
I was wondering how to do some text classification with Java and Apache Mahout. Isabel Drost-Fromm gave a talk at the LuceneSolrRevolution Conference (Dublin, 2013) about this topic and how Apache Mahout and Lucene can help you.
It is a good introduction to the topic, and I really enjoyed what was presented in the talk.
Lucene, Mahout and Hadoop (only a little bit) sound really great for a talk about how to do text classification.
The general idea behind the complete process to classify documents follows these steps:
HTML >> Apache Tika
Fulltext >> Lucene Analyzer
TokenStream >> FeatureVectorEncoder
Vector >> Online Learner
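As a rough illustration of the data flow above, here is a hypothetical, stdlib-only Java sketch where each stage is a simplified stand-in: in a real setup Apache Tika would extract the full text, a Lucene Analyzer would produce the token stream, and Mahout's FeatureVectorEncoder plus an online learner would handle the last two steps. All the names and the toy training data below are mine:

```java
import java.util.Locale;

// Toy end-to-end version of the pipeline:
// fulltext -> tokens -> hashed feature vector -> online logistic learner.
public class TextClassifierSketch {
    static final int DIM = 1 << 16;        // hashed feature space size
    double[] weights = new double[DIM];    // binary logistic model
    double lr = 0.5;                       // learning rate

    // Lucene-Analyzer stand-in: lowercase and split on non-letters.
    static String[] tokenize(String fulltext) {
        return fulltext.toLowerCase(Locale.ROOT).split("[^a-z]+");
    }

    // FeatureVectorEncoder stand-in: hash each token into a fixed-size vector.
    static double[] encode(String[] tokens) {
        double[] v = new double[DIM];
        for (String t : tokens) {
            if (t.isEmpty()) continue;
            v[Math.floorMod(t.hashCode(), DIM)] += 1.0;
        }
        return v;
    }

    // Probability that the document belongs to class 1.
    double predict(double[] v) {
        double z = 0;
        for (int i = 0; i < DIM; i++) z += weights[i] * v[i];
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // Online learner stand-in: one SGD step on the logistic loss per document.
    void learn(String doc, int label) {
        double[] v = encode(tokenize(doc));
        double err = label - predict(v);
        for (int i = 0; i < DIM; i++) weights[i] += lr * err * v[i];
    }

    public static void main(String[] args) {
        TextClassifierSketch clf = new TextClassifierSketch();
        for (int epoch = 0; epoch < 20; epoch++) {
            clf.learn("the striker scored a late goal", 1);   // class 1: sports
            clf.learn("the market rallied on earnings", 0);   // class 0: finance
        }
        System.out.println(clf.predict(encode(tokenize("a goal by the striker"))));
    }
}
```

The real classes do the same job far better (proper tokenization, multiple hash probes, regularisation), but the shape of the data flow is the same: each stage consumes the previous stage's output.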
Of course, Isabel's advice was to reuse the libraries you already have at hand, take a look inside at the algorithms used there, and improve them if you need to. As a first approach, it is really good to see how things work.
Mahout is a really good library for machine learning; it used MapReduce to integrate tightly with Hadoop (v1.0), although since April 2014 the project has decided to move forward:
The Mahout community decided to move its codebase onto modern data processing systems that offer a richer programming model and more efficient execution than Hadoop MapReduce. (You can read that on their web site.)
At the end of the video there is a recommendation for everyone to participate in the project: fixing bugs, writing documentation, reporting bugs… There is always a lot to do in open source projects. If you are using these libraries and are interested in the project, I recommend subscribing to the mailing lists.
I really recommend watching the video if you are interested in the field; I think she gave a good talk on a good topic. You can take a look at the slides too.
There are a lot of books in the field of Machine Learning; a quick search on Amazon returns more than 25,000. I wanted to filter all those books and choose the most useful ones. I looked on Google and Quora and read some posts I found around the internet. A lot of people give lists of 10 to 20 books about machine learning, statistical learning, reinforcement learning… I just wanted to find the two most interesting books to get into the field.
With these books, it is possible to learn the general aspects of the topic and later go deeper into the part that sounds most interesting.
The author is Christopher M. Bishop, a Distinguished Scientist at Microsoft Research Cambridge, where he leads the Machine Learning and Perception group.
This book gives you a really good overview of the commonly used algorithms in Machine Learning.
Both books are theoretical and will give you a good introduction. Of course there are many more books in the area, some of them more practical, some about statistical learning… But I think it is good to have a simple starting point.
I have started with Tom M. Mitchell’s book; I will share my impressions when I have finished it.
When you have done so many projects, from scratch or with legacy code, for different types of customers (banks, telecoms, retail…) and in different types of companies (big, small, startups…), you always move on to the next project thinking that you will do better next time. But how?
For me, Scrum changed how things could be done better. Agile is what this book is about, but personally I feel that both are really connected: the names of the meetings or events are different, but the way they are organized looks similar.
The book is about how to execute your projects in a way that makes your customer feel more confident about the job you are doing. It is not only about agile; it is about how to execute projects so that we can deal with changes and still have quality, have immediate feedback about the current status, and be ready for production from the beginning.
Not all customers are the same, not all product owners are the same, not all companies are the same; in conclusion: not all the XXX are the same.
I like the idea of the Inception Deck. It is really good to have everyone on the team working on the big picture as a starting approach: like a mirror where everybody looks at how the project looks to them and how things are going to be. After that you can start, and change things later if you need to.
In general, the book is good for getting a feel for how it could be if you organize a project in an agile way: what the problems are going to be; how you could engage the customer/product owner; how the team should work; how testing and continuous deployment should work; how transparent the status of the project is going to be; how you deal with changes from the beginning… A lot of things together in a few pages :).
So many times I find myself writing lists of articles to read, or notes about them or about books… but I don’t always write those notes in the same place. I decided it would be good to put all of those notes together in one place.
Some time ago I used to write on Tumblr, but it has not been updated in a long time.
I needed to put all those interesting notes, comments, ideas and investigations together. I think the evolution of those ideas, the experiences, the articles read… all of these could be interesting to share and collect here.