A majority of the work that we do as programmers involves data manipulation in some manner. It can range from large-scale collection, aggregation, and statistical analysis across distributed systems to something as simple as making a graph in a spreadsheet. In the middle of that range is the general task of ETL (Extract, Transform, and Load), which has its own range of scale. In this episode Romain Dorgueil discusses his experiences building ETL systems and the problems that he routinely encountered that led him to create Bonobo, a lightweight, easy-to-use toolkit for data processing in Python 3. He also explains how the system works under the hood, how you can use it for your projects, and what he has planned for the future.
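For a sense of what a Bonobo pipeline looks like in code, here is a minimal sketch that chains an extract, transform, and load step with bonobo.Graph and bonobo.run; the stage functions are illustrative placeholders, not anything from the episode:

```python
import bonobo

def extract():
    # Extract: yield raw records one at a time.
    yield "alice"
    yield "bob"

def transform(name):
    # Transform: normalize each record as it flows through.
    yield name.title()

def load(name):
    # Load: write the result out (here, just print it).
    print(name)

# Wire the stages into a graph and execute it.
graph = bonobo.Graph(extract, transform, load)

if __name__ == "__main__":
    bonobo.run(graph)
```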
Data mining and visualization are important skills to have in the modern era, regardless of your job responsibilities. In order to make it easier to learn and use these techniques and technologies, Blaž Zupan and Janez Demšar, along with many others, have created Orange. In this episode they explain how they built a visual programming interface for creating data analysis and machine learning workflows to simplify the work of gaining insights from the myriad data sources that are available. They discuss the history of the project, how it is built, the challenges that they have faced, and how they plan on growing and improving it in the future.
A majority of projects will eventually need some way of managing periodic or long-running tasks outside of the context of the main application. This is where a distributed task queue becomes useful. For many in the Python community the standard option is Celery, though there are other projects to choose from. This week Bogdan Popa explains why he was dissatisfied with the current landscape of task queues and the features that he decided to focus on while building Dramatiq, a new, opinionated distributed task queue for Python 3. He also describes how it is designed, how you can start using it, and what he has planned for the future.
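As a rough illustration of the actor-based style that Dramatiq encourages, this sketch assumes a local RabbitMQ broker, and the task itself is a made-up example rather than anything from the episode:

```python
import dramatiq
from dramatiq.brokers.rabbitmq import RabbitmqBroker

# Point Dramatiq at a message broker (a local RabbitMQ instance is assumed here).
dramatiq.set_broker(RabbitmqBroker(url="amqp://guest:guest@localhost:5672"))

@dramatiq.actor
def send_welcome_email(user_id):
    # Runs asynchronously in a worker process started with the `dramatiq` CLI.
    print(f"Sending welcome email to user {user_id}")

# Enqueue the task from the application; the call returns immediately.
send_welcome_email.send(42)
```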
Jake Vanderplas is an astronomer by training and a prolific contributor to the Python data science ecosystem. In his current role he uses Python to teach principles of data analysis and data visualization to students and researchers at the University of Washington. In this episode he discusses how he got started with Python, the challenges of teaching best practices for software engineering and reproducible analysis, and how easy-to-use tools for data visualization can help democratize access to, and understanding of, data.
Kenneth Reitz has contributed many things to the Python community, including projects such as Requests, Pipenv, and Maya. He also started the community-written Hitchhiker’s Guide to Python, and serves on the board of the Python Software Foundation. This week he talks about his career in the Python community and digs into some of his current work.
As we rely more on small, distributed processes for building our applications, being able to take advantage of asynchronous I/O is increasingly important for performance. This week Alex Grönholm explains how the Asphalt Framework was created to make it easier to build these network-oriented software stacks and the technical challenges that he faced in the process.
The importance of testing your software is widely talked about and well understood. What is not as often discussed is the different types of testing, and how end-to-end tests can help your team ensure that your application functions properly when it is released to production. This week Luciano Puccio shares the work that he has done on Golem, a framework for building and executing an automation suite to exercise the entire system from the perspective of the user. He discusses his reasons for creating the project, how he thinks about testing, and where he plans on taking Golem in the future. Give it a listen and then take it for a test drive.
Do you know what is happening in your production systems right now? If you have a comprehensive metrics platform then the answer is yes. If your answer is no, then this episode is for you. Jason Dixon and Dan Cech, core maintainers of the Graphite project, talk about how Graphite is architected to capture your time series data and give you the ability to use it for answering questions. They cover the challenges that have been faced in evolving the project, the strengths that have let it stand the test of time, and the features that will be coming in future releases.
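For readers who have not used Graphite before, metrics are typically pushed to its Carbon daemon over the plaintext protocol; this sketch assumes Carbon is listening on the default port 2003 on localhost, and the metric name is invented:

```python
import socket
import time

CARBON_HOST, CARBON_PORT = "localhost", 2003  # assumed default Carbon listener

def send_metric(path, value, timestamp=None):
    # Plaintext protocol: "<metric.path> <value> <unix timestamp>\n"
    line = f"{path} {value} {int(timestamp or time.time())}\n"
    with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
        sock.sendall(line.encode("utf-8"))

send_metric("web.requests.count", 42)
```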
A relevant and timely recommendation can be a pleasant surprise that will delight your users. Unfortunately it can be difficult to build a system that will produce useful suggestions, which is why this week’s guest, Nicolas Hug, built a library to help with developing and testing collaborative recommendation algorithms. He explains how he took the code he wrote for his PhD thesis and cleaned it up to release as an open source library and his plans for future development on it.
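The library in question is Surprise, and a minimal sketch of evaluating a collaborative-filtering algorithm with it might look like the following, assuming the built-in MovieLens dataset and the SVD algorithm (the exact API may vary between versions):

```python
from surprise import SVD, Dataset
from surprise.model_selection import cross_validate

# Load the bundled MovieLens 100k ratings (downloaded on first use).
data = Dataset.load_builtin("ml-100k")

# Evaluate a matrix-factorization recommender with 5-fold cross-validation.
cross_validate(SVD(), data, measures=["RMSE", "MAE"], cv=5, verbose=True)
```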
With the proliferation of messaging applications, there has been a growing demand for bots that can understand our wishes and perform our bidding. The rise of artificial intelligence has brought the capacity for understanding human language. Combining these two trends gives us chatbots that can be used as a new interface to the software and services that we depend on. This week Joey Faulkner shares his work with Rasa Technologies and their open source libraries for understanding natural language and conducting a conversation. We talk about how the Rasa Core and Rasa NLU libraries work and how you can use them to replace your dependence on API services and own your data.