I graduated from the African Masters in Machine Intelligence (AMMI), hosted by AIMS in partnership with Facebook and Google and based in Kigali, Rwanda. Before that, I graduated from AIMS-Senegal in February 2018 with a Master's degree in Mathematical Sciences; my Master's thesis, on web recommendation systems for job seekers and recruiters, is available on Google Scholar. I am passionate about theoretical deep learning with applications in visual recognition, reinforcement learning, and topics related to Gaussian processes. For more information about books on machine learning and deep learning, visit this learning-resources page.
Here you can find some tutorials on machine learning and deep learning for those who are new to the field. Stay tuned on my blog. For now, you can have a look at samples of the code I wrote for my first-semester assignments at AMMI here. That part is mostly about Foundations of Machine Learning.
For deep learning tutorials, you can have a look at a sample of the deep learning code I wrote for my coursework during the first semester here.
For reinforcement learning tutorials, you can have a look at a sample of my basic reinforcement learning code for the Q-learning algorithm here, or you can browse further code samples here.
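To give a flavour of what the Q-learning tutorial covers, here is a minimal sketch of tabular Q-learning on a toy five-state chain MDP. The environment and all parameter values are my own illustration, not taken from the coursework:

```python
import numpy as np

# Toy chain MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 gives reward 1 and ends the episode.
n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next == n_states - 1
    return s_next, reward, done

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # greedy policy per state
```

After training, the greedy policy moves right from every non-terminal state, which is optimal for this chain.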
This course covers basic concepts in machine learning in high dimension and the importance of regularization. We study in detail high-dimensional linear models regularized by the Euclidean norm, including ridge regression, ridge logistic regression, and support vector machines. We then show how positive definite kernels allow us to transform these linear models into rich nonlinear models, usable even for non-vectorial data such as strings and graphs, and convenient for integrating heterogeneous data. For kernel methods tutorials, you can have a look at a sample of the code I wrote for my coursework during the second semester here.
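As a small illustration of how a positive definite kernel turns ridge regression into a nonlinear model, here is a sketch of kernel ridge regression with an RBF kernel. The data, kernel bandwidth, and regularization value are illustrative choices of mine, not from the course:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_kernel_ridge(X, y, lam=0.1, gamma=1.0):
    K = rbf_kernel(X, X, gamma)
    # Representer theorem: the solution is a combination of kernel evaluations,
    # with coefficients alpha = (K + lam * n * I)^{-1} y
    return np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Fit a noisy sine curve -- a clearly nonlinear target
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = fit_kernel_ridge(X, y, lam=0.01)
y_hat = predict(X, alpha, X)
print(np.mean((y_hat - y) ** 2))  # training mean squared error
```

The same code works for any positive definite kernel; swapping `rbf_kernel` for a string or graph kernel is what makes the method usable on non-vectorial data.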
Many estimation problems in statistical learning are formulated as optimization problems. These formulations allowed a separation between the analysis of the estimator’s performance and the development of problem-solving algorithms. Faced with large volumes of data, such separation is no longer effective and analysis must combine statistics and optimization. In this course, the classic statistical results on M-estimation and convex optimization will be presented in a unified way, showing the links between statistics and optimization. Emphasis will be placed on non-asymptotic analysis of stochastic approximation and stochastic gradient.
Note that the purpose of this course is to understand how algorithms are analyzed, so the emphasis will be on proofs. For optimization tutorials, you can have a look at a sample of the code I wrote for my coursework during the second semester here.
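A minimal sketch of the stochastic approximation studied in this course: stochastic gradient descent on a least-squares problem, with Polyak-Ruppert averaging of the iterates. The data, step-size schedule, and iteration count are my own toy choices for illustration:

```python
import numpy as np

# Generate a linear regression problem with Gaussian features and small noise
rng = np.random.default_rng(0)
n, d = 1000, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
w_avg = np.zeros(d)
for t in range(1, n + 1):
    i = rng.integers(n)                      # sample one data point
    grad = (X[i] @ w - y[i]) * X[i]          # gradient of (1/2)(x_i^T w - y_i)^2
    w -= (1.0 / np.sqrt(t)) * grad           # decaying step size ~ 1/sqrt(t)
    w_avg += (w - w_avg) / t                 # running average of the iterates

print(np.linalg.norm(w_avg - w_true))  # distance of averaged iterate to the truth
```

The averaged iterate `w_avg` is typically much more stable than the last iterate `w`, which is the behaviour the non-asymptotic analysis in the course makes precise.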
This course provides a unifying introduction to probabilistic modelling through the framework of graphical models, together with their associated learning and inference algorithms. For probabilistic graphical models tutorials, you can have a look at a sample of the code I wrote for my coursework during the second semester here.
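As a tiny example of inference in a graphical model, here is exact inference by enumeration in the classic sprinkler network (Rain → WetGrass ← Sprinkler). The network and its probability tables are a standard textbook toy, not taken from the course materials:

```python
# Conditional probability tables for a three-node Bayesian network
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
P_wet = {  # P(wet = True | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(r, s, w):
    # Factorization given by the graph: P(r) P(s) P(w | r, s)
    pw = P_wet[(r, s)] if w else 1 - P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * pw

# Query P(rain | wet = True): sum out the sprinkler, then normalize
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 3))
```

Enumeration is exponential in the number of variables; the inference algorithms covered in the course (e.g. message passing on trees) exploit the graph structure to avoid that blow-up.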
Other amazing courses we took at AMMI include:
For additional information please check:
my publications on Google Scholar
some of my commented code on GitHub