
Showing posts from 2013

Bayesian Latent models: the shortest explanation ever

I found this on Wikipedia, and it is probably the best and shortest explanation I have ever found of latent variable models, and especially of these Bayesian non-parametric models:

The Chinese restaurant process is often used to provide a prior distribution over assignments of objects to latent categories. The Indian buffet process is often used to provide a prior distribution over assignments of latent binary features to objects.

From the first definition, you clearly see the construction of your prior over the space of categories: you have an infinitely large space in which all possible combinations of categories exist, and you are building a distribution over this space, which will then be used as a prior for the variables describing your objects.
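To make the first definition concrete, here is a minimal sketch in Python (the function name and the choice of concentration parameter alpha are mine, added for illustration) of how the Chinese restaurant process assigns objects to latent categories one at a time: each new object joins an existing category with probability proportional to its current size, or opens a new category with probability proportional to alpha.

```python
import random

def crp_assignments(n_objects, alpha, seed=0):
    """Draw category (table) assignments for n_objects objects from a
    Chinese restaurant process with concentration parameter alpha."""
    rng = random.Random(seed)
    counts = []        # counts[k] = number of objects already in category k
    assignments = []
    for i in range(n_objects):
        # Existing category k is chosen with probability counts[k] / (i + alpha),
        # a brand-new category with probability alpha / (i + alpha).
        weights = counts + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(0)   # object i opens a new category
        assignments.append(k)
        counts[k] += 1
    return assignments

print(crp_assignments(10, alpha=1.0))   # one draw of 10 category labels
```

Running it with different seeds gives different partitions, which is exactly what a prior over category assignments means: a distribution over all the possible ways of grouping the objects.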

I'll let you think about the second definition, but picture an infinite collection of labels that you can attach or not to each object, depending on whether it has the feature or not (and maybe my explanation is not as clear as the definition f…
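In the same spirit, here is a minimal sketch of the Indian buffet process as a generator of binary feature labels, assuming the usual construction with a concentration parameter alpha (the function names and the pure-Python Poisson helper are mine, added for illustration): each object re-uses existing features in proportion to how many earlier objects already carry them, then tries a Poisson-distributed number of brand-new features.

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's method: multiply uniforms until the product drops below exp(-lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def ibp_features(n_objects, alpha, seed=0):
    """Draw a binary feature matrix Z from an Indian buffet process:
    Z[i][k] = 1 if object i carries latent feature k."""
    rng = random.Random(seed)
    feature_counts = []   # feature_counts[k] = how many objects already have feature k
    rows = []
    for i in range(1, n_objects + 1):
        # Re-use each existing feature with probability (popularity / i) ...
        row = [1 if rng.random() < m / i else 0 for m in feature_counts]
        for k, z in enumerate(row):
            feature_counts[k] += z
        # ... then try a Poisson(alpha / i) number of brand-new features.
        new = poisson_sample(rng, alpha / i)
        row.extend([1] * new)
        feature_counts.extend([1] * new)
        rows.append(row)
    # Pad earlier rows with zeros so every row has the same number of columns.
    K = len(feature_counts)
    return [row + [0] * (K - len(row)) for row in rows]

print(ibp_features(5, alpha=2.0))
```

Stacking the rows gives the familiar binary matrix with a potentially unbounded number of feature columns, which is exactly the kind of prior the second definition describes.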

Google Summer of Code 2013

I'm mentoring 2 students for Google Summer of Code 2013, together with my colleague Nasos, who is a regular contributor to the project.
Boost has 7 students this year, and at Boost.uBLAS we're happy to say that 2 of the 7 are ours!

One will work on implementing the missing BLAS level 1, 2 and 3 functions and on introducing CPU-level parallelism such as auto-vectorization and hand-written SSE instructions.
The other will work on bringing parallelism at the core level, and especially at the network level, to make Boost.uBLAS one of the only general-purpose linear algebra libraries that can distribute computations over a network using MPI.

Creating a BTRFS filesystem on 2 disks

I know it has nothing to do with Machine Learning, AI or even C++ coding, but think about it: it's also part of the job. You've just received a massive dataset and you only have small hard disks. In my case I have 2 disks of 400 GB. I know that's small by today's standards, and I don't want to bother with lots of partitions and a complex tree structure, especially because I need to store my massive dataset, which requires more than... 400 GB.

With modern Linux there are at least 2 solutions:
- LVM, the Logical Volume Manager
- BTRFS, a new filesystem that offers incredible features
I have been using LVM for years, so I decided to give BTRFS a try. Here is my simple setup (the corresponding commands are sketched after the list):

- I have a desktop computer in which I've just added 2 hard disks of 400 GB
- I want to group them as if they were a single 800 GB disk
- I want RAID0 for speed: RAID0 stripes the data across the 2 hard disks at the same time, making my new volume twice as fast as a single hard disk (this is theory; in practice, it i…
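In practice the whole setup boils down to a couple of commands. Here is a minimal sketch, assuming the two new disks show up as /dev/sdb and /dev/sdc (check yours with lsblk) and using a hypothetical label and mount point:

```sh
# Create one btrfs filesystem spanning both disks, striping data and metadata (RAID0).
mkfs.btrfs -L bigdata -d raid0 -m raid0 /dev/sdb /dev/sdc

# Mount it: pointing mount at any member device brings up the whole multi-device volume.
mkdir -p /mnt/bigdata
mount /dev/sdb /mnt/bigdata

# Check that both disks are indeed part of the filesystem.
btrfs filesystem show /mnt/bigdata
```

The -d raid0 flag matches the goal in the list above (stripe data for speed); choosing -m raid1 for the metadata instead would keep a mirrored copy of the filesystem metadata at a negligible space cost. Either way, RAID0 data means no redundancy: lose one disk and the whole volume is gone.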