The Past, Present, and Future of Twisted with Moshe Zadka - Episode 170

Twisted is one of the earliest frameworks for developing asynchronous applications in Python and it continues to fulfill its original purpose. It can be used to build network servers that integrate a multitude of protocols, increase the performance of your I/O bound applications, serve as the full web stack for your WSGI projects, and anything else that needs a battle-tested and performant foundation. In this episode long-time maintainer Moshe Zadka discusses the history of Twisted, how it has evolved over the years, the transition to Python 3, some of its myriad use cases, and where it is headed in the future. Try it out today and then send some thanks to all of the people who have dedicated their time to building it.
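
To give a flavor of the kind of network server Twisted is built for, here is a minimal sketch of a TCP echo service using its protocol and reactor APIs; the port number is an arbitrary choice.

```python
# A minimal sketch of a Twisted TCP echo server; port 8000 is arbitrary.
from twisted.internet import protocol, reactor


class Echo(protocol.Protocol):
    def dataReceived(self, data):
        # Write any received bytes straight back to the client.
        self.transport.write(data)


class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()


reactor.listenTCP(8000, EchoFactory())
reactor.run()
```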

Mike Driscoll And His Career In Python - Episode 169

Mike Driscoll has been writing blog posts and books for the Python community for years, including his popular PyDev of the Week interview series. In his daily work he uses Python to test graphical interfaces written in C++ and Qt for embedded platforms. In this episode he explains his work, how he got involved in writing as a regular exercise, and gives an overview of his recent books.

The Pulp Artifact Repository with Bihan Zhang and Austin Macdonald - Episode 168

Hosting your own artifact repositories can have a huge impact on the reliability of your production systems. It reduces your reliance on the availability of external services during deployments and ensures that you have access to a consistent set of dependencies with known versions. Many repositories support only one type of package, forcing you to maintain multiple systems, but Pulp is a platform that handles multiple content types and is easily extended to manage everything you need for running your applications. In this episode maintainers Bihan Zhang and Austin Macdonald explain how the Pulp project works, the exciting new changes coming in version 3, and how you can get it set up to use for your deployments today.

Bringing Africa Online At Ascoderu with Clemens Wolff - Episode 167

The future is here, it’s just not evenly distributed. One of the places where this is especially true is in sub-Saharan Africa which is a vast region with little to no reliable internet connectivity. To help communities in this region leapfrog infrastructure challenges and gain access to opportunities for education and market information the Ascoderu non-profit has built Lokole. In this episode one of the lead engineers on the project, Clemens Wolff, explains what it is, how it is built, and how the venerable e-mail protocols can continue to provide access cheaply and reliably.

Understanding Machine Learning Through Visualizations with Benjamin Bengfort and Rebecca Bilbro - Episode 166

Machine learning models are often inscrutable and it can be difficult to know whether you are making progress. To improve feedback and speed up iteration cycles Benjamin Bengfort and Rebecca Bilbro built Yellowbrick to easily generate visualizations of model performance. In this episode they explain how to use Yellowbrick in the process of building a machine learning project, how it aids in understanding how different parameters impact the outcome, and the improved understanding among teammates that it creates. They also explain how it integrates with the scikit-learn API, the difficulty of producing effective visualizations, and future plans for improvement and new features.
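
As a rough sketch of how Yellowbrick wraps the scikit-learn API, the example below renders a classification report visualizer; the iris dataset and logistic regression estimator are just stand-ins.

```python
# A minimal sketch of Yellowbrick's scikit-learn-style visualizer API;
# the dataset and estimator are arbitrary stand-ins.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

visualizer = ClassificationReport(LogisticRegression())
visualizer.fit(X_train, y_train)   # fit the wrapped estimator
visualizer.score(X_test, y_test)   # compute the metrics that feed the plot
visualizer.show()                  # named poof() in older Yellowbrick releases
```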

Modern Database Clients On The Command Line with Amjith Ramanujam - Episode 165

The command line is a powerful and resilient interface for getting work done, but the user experience is often lacking. This can be especially pronounced in database clients because of the amount of information being transferred and examined. To help improve the utility of these interfaces Amjith Ramanujam built pgcli, quickly followed by mycli, using the Prompt Toolkit library. In this episode he describes his motivation for building these projects, how their popularity led him to create even more clients, and how these tools can help you in your command line adventures.
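
To show the kind of interactive experience Prompt Toolkit makes possible for clients like these, here is a small sketch of a REPL with keyword completion; the SQL keyword list is a placeholder, not what pgcli actually ships.

```python
# A minimal sketch of a Prompt Toolkit REPL with tab completion;
# the keyword list is a placeholder for illustration only.
from prompt_toolkit import PromptSession
from prompt_toolkit.completion import WordCompleter

sql_completer = WordCompleter(
    ["SELECT", "FROM", "WHERE", "INSERT", "UPDATE", "DELETE"],
    ignore_case=True,
)
session = PromptSession(completer=sql_completer)

while True:
    try:
        text = session.prompt("sql> ")
    except (EOFError, KeyboardInterrupt):
        break
    print(f"You entered: {text}")
```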

Pandas Extension Arrays with Tom Augspurger - Episode 164

Pandas is a Swiss Army knife for data processing in Python but it has long been difficult to customize. In the latest release there is now an extension interface for adding custom data types with namespaced APIs. This allows for building and combining domain-specific use cases and alternative storage mechanisms. In this episode Tom Augspurger describes how the new ExtensionArray works, how it came to be, and how you can start building your own extensions today.
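
For a feel of the namespaced-API side of this, the sketch below registers a hypothetical "units" accessor on DataFrames; a full custom ExtensionArray subclass involves more boilerplate than fits here.

```python
# A minimal sketch of pandas' namespaced accessor registration;
# the "units" namespace and its method are hypothetical examples.
import pandas as pd


@pd.api.extensions.register_dataframe_accessor("units")
class UnitsAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    def to_kilometers(self, column):
        # Assume the named column holds distances in meters.
        return self._obj[column] / 1000.0


df = pd.DataFrame({"distance_m": [1500.0, 4200.0]})
print(df.units.to_kilometers("distance_m"))
```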

Making A Difference Through Software With Eric Schles - Episode 163

Software development is a skill that can create value and reduce drudgery in a wide variety of contexts. Sometimes the causes that are most in need of software expertise are also the least able to pay for it. By volunteering our time and abilities to causes that we believe in, we can help make a tangible difference in the world. In this episode Eric Schles describes his experiences working on social justice initiatives and the types of work that proved to be the most helpful to the groups that he was working with.

Asking Questions From Data Using Active Learning with Tivadar Danka - Episode 162

One of the challenges of machine learning is obtaining large enough volumes of well-labelled data. An approach that mitigates the effort required for labelling data sets is active learning, in which the samples a model is least certain about are selected and labelled by domain experts. In this episode Tivadar Danka describes how he built modAL to bring active learning to bioinformatics. He is using it for human-in-the-loop training of models to detect cell phenotypes with massive unlabelled datasets. He explains how the library works, how he designed it to be modular for a broad set of use cases, and how you can use it for training models of your own.
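
As a rough sketch of a modAL query loop under these ideas, the example below seeds a learner with a handful of labels and repeatedly asks it for the sample it is least certain about; the estimator, pool, and labelling step are placeholders standing in for real data and a real expert.

```python
# A rough sketch of a modAL active-learning loop; the random pool and the
# automatic "labels" stand in for real data and a human annotator.
import numpy as np
from modAL.models import ActiveLearner
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_pool = rng.random((1000, 4))             # unlabelled pool
y_pool = (X_pool[:, 0] > 0.5).astype(int)  # stand-in "ground truth"

learner = ActiveLearner(
    estimator=RandomForestClassifier(),
    X_training=X_pool[:10],                # small labelled seed set
    y_training=y_pool[:10],
)

for _ in range(20):
    query_idx, query_sample = learner.query(X_pool)  # most uncertain sample
    # In practice a domain expert would supply this label.
    learner.teach(X_pool[query_idx], y_pool[query_idx])
```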

Great Expectations For Your Data Pipelines with Abe Gong and James Campbell - Episode 161

Testing is a critical activity in all software projects, but one that is often neglected in data pipelines. The complexities introduced by the inherent statefulness of the problem domain and the interdependencies between systems combine to make pipeline testing difficult to manage. To make this endeavor more manageable Abe Gong and James Campbell have created Great Expectations. In this episode they discuss how you can use the project to create tests in the exploratory phase of building a pipeline and leverage those to monitor your systems in production. They also discuss how Great Expectations works, the difficulties associated with pipeline testing and managing the associated technical debt, and their future plans for the project.
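
To illustrate the declarative style of test the project encourages, here is a hedged sketch using its early pandas-flavoured interface, where expectations are recorded against a dataframe and re-validated later; the file path and column names are placeholders.

```python
# A minimal sketch of Great Expectations' pandas-flavoured API;
# the file path and column names here are placeholders.
import great_expectations as ge

# Wraps the file in a dataframe subclass that records expectations as you go.
df = ge.read_csv("orders.csv")

df.expect_column_values_to_not_be_null("order_id")
df.expect_column_values_to_be_between("quantity", min_value=1, max_value=1000)

# Re-run every recorded expectation and inspect the aggregate result.
results = df.validate()
print(results)
```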