One of the challenges of machine learning is obtaining large enough volumes of well-labelled data. An approach to reducing the labelling effort is active learning, in which the model identifies the examples it is most uncertain about and asks domain experts to label them. In this episode Tivadar Danka describes how he built modAL to bring active learning to bioinformatics. He is using it for human-in-the-loop training of models that detect cell phenotypes in massive unlabelled datasets. He explains how the library works, how he designed it to be modular enough for a broad set of use cases, and how you can use it to train models of your own.
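To make that workflow concrete, here is a minimal sketch of a pool-based active learning loop with modAL. It uses a synthetic scikit-learn dataset as a stand-in for a real unlabelled pool and simply looks up held-back labels where a domain expert would normally answer each query; the dataset, estimator, and query budget are illustrative choices, not anything specific to the episode.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from modAL.models import ActiveLearner
from modAL.uncertainty import uncertainty_sampling

# Synthetic stand-in for a large, mostly unlabelled dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_seed, y_seed = X[:20], y[:20]      # small labelled seed set
X_pool, y_pool = X[20:], y[20:]      # "unlabelled" pool

learner = ActiveLearner(
    estimator=RandomForestClassifier(),
    query_strategy=uncertainty_sampling,   # ask about the most uncertain samples
    X_training=X_seed,
    y_training=y_seed,
)

for _ in range(10):
    query_idx, query_instance = learner.query(X_pool)
    # In a real workflow a domain expert supplies this label; here we look it up.
    learner.teach(X_pool[query_idx], y_pool[query_idx])
    X_pool = np.delete(X_pool, query_idx, axis=0)
    y_pool = np.delete(y_pool, query_idx, axis=0)
```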
With libraries such as TensorFlow, PyTorch, scikit-learn, and MXNet being released it is easier than ever to start a deep learning project. Unfortunately, it is still difficult to manage the scaling and reproducibility of training for these projects. Mourad Mourafiq built Polyaxon on top of Kubernetes to address this shortcoming. In this episode he shares his reasons for starting the project, how it works, and how you can start using it today.
One of the biggest issues facing us is the availability of sustainable energy sources. As individuals and energy consumers it is often difficult to understand how to make informed choices about energy use that reduce our impact on the environment. Electricity Map is a project that provides up-to-date and historical information about the mix of sources producing the electricity we use. In this episode Olivier Corradi discusses his motivation for creating Electricity Map, how it is built, and his goals for the project and his other work at Tomorrow Co.
Making computers identify and understand what they are looking at in digital images is an ongoing challenge. Recent years have seen notable increases in the accuracy and speed of object detection thanks to deep learning and new applications of neural networks. To make it easier for developers to take advantage of these techniques, Tryo Labs built Luminoth. In this interview Joaquin Alori explains how Luminoth works, how it can be used in your projects, and how it compares to API-oriented services for computer vision.
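For a sense of what that looks like in practice, here is a short sketch based on Luminoth's documented Python interface; the image file and checkpoint name are placeholders, and a pre-trained checkpoint has to be downloaded first with the `lumi` command line tool.

```python
from luminoth import Detector, read_image, vis_objects

# Assumes a checkpoint has already been fetched, e.g. `lumi checkpoint download fast`.
image = read_image('street.jpg')
detector = Detector(checkpoint='fast')

# Returns the detected objects with their labels, bounding boxes, and confidences.
objects = detector.predict(image)
print(objects)

# Draw the boxes on the image and save the result.
vis_objects(image, objects).save('street-detected.jpg')
```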
Learning how to read is one of the most important steps in empowering someone to build a successful future. In developing nations, access to teachers and classrooms is not universally available, so the Global Learning XPRIZE serves to incentivize the creation of technology that provides children with the tools necessary to teach themselves literacy. Kjell Wooding helped create Learn Leap Fly in order to participate in the competition and used Python and Kivy to build a platform for children to develop their reading skills in a fun and engaging environment. In this episode he discusses his experience participating in the XPRIZE competition, how he and his team built what is now Kasuku Stories, and how Python and its ecosystem helped make it possible.
Data mining and visualization are important skills to have in the modern era, regardless of your job responsibilities. In order to make it easier to learn and use these techniques and technologies Blaž Zupan and Janez Demšar, along with many others, have created Orange. In this episode they explain how they built a visual programming interface for creating data analysis and machine learning workflows to simplify the work of gaining insights from the myriad data sources that are available. They discuss the history of the project, how it is built, the challenges that they have faced, and how they plan on growing and improving it in the future.
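Although the episode centers on the visual canvas, the same components can be scripted directly. The snippet below is a minimal sketch using the Orange3 scripting API of roughly that era; the bundled iris dataset and the choice of learner are purely illustrative.

```python
import Orange

# Load a bundled example dataset and cross-validate a classifier,
# mirroring what a Data -> Learner -> Test & Score workflow does in the canvas.
data = Orange.data.Table("iris")
learner = Orange.classification.LogisticRegressionLearner()
results = Orange.evaluation.CrossValidation(data, [learner], k=5)
print("Classification accuracy:", Orange.evaluation.CA(results))
```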
A relevant and timely recommendation can be a pleasant surprise that will delight your users. Unfortunately it can be difficult to build a system that produces useful suggestions, which is why this week’s guest, Nicolas Hug, built Surprise, a library for developing and testing collaborative recommendation algorithms. He explains how he took the code he wrote for his PhD thesis and cleaned it up to release as an open source library, and his plans for its future development.
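A typical first experiment with Surprise looks roughly like the sketch below: load a built-in ratings dataset, pick an algorithm, and cross-validate it. The choice of SVD and of the MovieLens data here is illustrative.

```python
from surprise import SVD, Dataset
from surprise.model_selection import cross_validate

# Built-in MovieLens 100k ratings; downloaded on first use.
data = Dataset.load_builtin("ml-100k")

# A matrix-factorization algorithm; other collaborative-filtering
# algorithms (KNN variants, baselines, ...) share the same interface.
algo = SVD()

# 5-fold cross-validation with standard rating-prediction metrics.
cross_validate(algo, data, measures=["RMSE", "MAE"], cv=5, verbose=True)
```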
With the proliferation of messaging applications, there has been a growing demand for bots that can understand our wishes and perform our bidding. Advances in artificial intelligence have made it possible for software to understand human language. Combining these two trends gives us chatbots that can serve as a new interface to the software and services we depend on. This week Joey Faulkner shares his work with Rasa Technologies and their open source libraries for understanding natural language and conducting a conversation. We talked about how the Rasa Core and Rasa NLU libraries work and how you can use them to replace your dependence on API services and own your data.
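As a rough illustration of the Rasa NLU side of that, here is a sketch of the training-and-parsing flow as it looked in the pre-1.0 `rasa_nlu` package; the file names are placeholders and the API has been reorganized in later releases.

```python
from rasa_nlu import config
from rasa_nlu.model import Trainer
from rasa_nlu.training_data import load_data

# Intent and entity examples, plus a pipeline configuration you control.
training_data = load_data("data/nlu.md")
trainer = Trainer(config.load("nlu_config.yml"))

# Training produces an interpreter that runs entirely on your own infrastructure.
interpreter = trainer.train(training_data)
print(interpreter.parse("I'd like to book a table for two tomorrow"))
```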
Do you wish that you had a self-driving car of your own? With Donkey you can make that dream a reality. This week Will Roscoe shares the story of how he got involved in the arena of self-driving car hobbyists and ended up building a Python library to act as the car's pilot. We talked about the hardware involved, how he has evolved the code to meet unexpected challenges, and how he plans to improve it in the future. So go build your own self-driving car and take it for a spin!
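To give a flavor of Donkey's design, the sketch below shows its part-based vehicle loop: each part declares named inputs and outputs, and the `Vehicle` runs them all at a fixed rate. The camera and pilot parts here are toy stand-ins for the real hardware and model parts a working car would use.

```python
import donkeycar as dk

class FakeCamera:
    """Stand-in for a camera part; a real car would use e.g. a Raspberry Pi camera part."""
    def run(self):
        return None  # would return the latest image array

class ConstantPilot:
    """Toy pilot that maps an image to a steering angle and throttle."""
    def run(self, image):
        return 0.0, 0.3  # steer straight, gentle throttle

V = dk.vehicle.Vehicle()
V.add(FakeCamera(), outputs=['cam/image_array'])
V.add(ConstantPilot(), inputs=['cam/image_array'], outputs=['angle', 'throttle'])

# Run the drive loop at 20 Hz for a bounded number of iterations.
V.start(rate_hz=20, max_loop_count=100)
```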