Of Checklists, Ethics, and Data with Emily Miller and Peter Bull - Episode 184

As data science becomes more widespread and has a greater impact on people's lives, it is important that those projects and products are built with a conscious consideration of ethics. Keeping ethical principles in mind throughout the lifecycle of a data project helps to reduce the effort needed to prevent negative outcomes from the use of the final product. Emily Miller and Peter Bull of Driven Data have created Deon to improve the communication and conversation around ethics within and between data teams. It is a Python project that generates a checklist of common concerns for data-oriented projects, organized by the stages of the lifecycle where they should be considered. In this episode they discuss their motivation for creating the project, the challenges and benefits of maintaining such a checklist, and how you can start using it today.

How Python Is Used To Build A Startup At Wanderu with Chris Kirkos and Matt Warren - Episode 183

The breadth of use cases that Python supports, coupled with the level of productivity that it provides through its ease of use, has contributed to the incredible popularity of the language. To explore the ways that it can contribute to the success of a young and growing startup, two of the lead engineers at Wanderu discuss their experiences in this episode. Matt Warren, the technical operations lead, explains the ways that he is using Python to build and scale the infrastructure that Wanderu relies on, as well as the ways that he deploys and runs the various Python applications that power the business. Chris Kirkos, the lead software architect, describes how the original Django application has grown into a suite of microservices, where they have opted to use a different language and why, and how Python is still being used for critical business needs. This is a great conversation for understanding the business impact of the Python language and ecosystem.

Understanding Machine Learning Through Visualizations with Benjamin Bengfort and Rebecca Bilbro - Episode 166

Machine learning models are often inscrutable, and it can be difficult to know whether you are making progress. To improve feedback and speed up iteration cycles, Benjamin Bengfort and Rebecca Bilbro built Yellowbrick to easily generate visualizations of model performance. In this episode they explain how to use Yellowbrick in the process of building a machine learning project, how it aids in understanding how different parameters impact the outcome, and the improved understanding among teammates that it creates. They also explain how it integrates with the scikit-learn API, the difficulty of producing effective visualizations, and future plans for improvement and new features.
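
As a rough illustration of how Yellowbrick wraps the scikit-learn API, here is a minimal sketch using the ClassificationReport visualizer (the dataset and estimator are arbitrary choices, not taken from the episode):

```python
# Minimal sketch of Yellowbrick's fit/score/show workflow layered on top of
# the scikit-learn API; the dataset and estimator are arbitrary choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from yellowbrick.classifier import ClassificationReport

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000)
visualizer = ClassificationReport(model)

visualizer.fit(X_train, y_train)   # fit the wrapped estimator
visualizer.score(X_test, y_test)   # compute per-class precision/recall/f1
visualizer.show()                  # render the report (poof() in older releases)
```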

Pandas Extension Arrays with Tom Augspurger - Episode 164

Pandas is a Swiss Army knife for data processing in Python, but it has long been difficult to customize. The latest release adds an extension interface for defining custom data types with namespaced APIs, which allows for building and combining domain-specific use cases and alternative storage mechanisms. In this episode Tom Augspurger describes how the new ExtensionArray works, how it came to be, and how you can start building your own extensions today.
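
The full extension interface involves subclassing ExtensionDtype and ExtensionArray; as a smaller taste of the namespaced-API side, here is a sketch of registering a custom DataFrame accessor (the "geo" namespace and its methods are illustrative, loosely following the pandas documentation):

```python
# Sketch of a namespaced API: registering a custom ".geo" accessor on
# DataFrames. The accessor name and methods here are hypothetical.
import pandas as pd


@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
    def __init__(self, pandas_obj):
        if not {"lat", "lon"}.issubset(pandas_obj.columns):
            raise AttributeError("DataFrame must have 'lat' and 'lon' columns")
        self._obj = pandas_obj

    @property
    def center(self):
        # Naive centroid of the coordinates, just to show a namespaced method
        return (self._obj["lat"].mean(), self._obj["lon"].mean())


df = pd.DataFrame({"lat": [40.7, 42.4], "lon": [-74.0, -71.1]})
print(df.geo.center)
```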

Asking Questions From Data Using Active Learning with Tivadar Danka - Episode 162

One of the challenges of machine learning is obtaining large enough volumes of well-labelled data. An approach to mitigate the effort required for labelling data sets is active learning, in which the most informative samples are identified and labelled by domain experts. In this episode Tivadar Danka describes how he built modAL to bring active learning to bioinformatics. He is using it for human-in-the-loop training of models to detect cell phenotypes in massive unlabelled datasets. He explains how the library works, how he designed it to be modular for a broad set of use cases, and how you can use it for training models of your own.
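
For a sense of what the human-in-the-loop workflow looks like in code, here is a minimal sketch of a query-and-teach loop with modAL's ActiveLearner (the estimator, query strategy, and synthetic data are arbitrary choices):

```python
# Minimal active-learning loop sketched with modAL's ActiveLearner;
# the estimator, query strategy, and synthetic data are arbitrary choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from modAL.models import ActiveLearner
from modAL.uncertainty import uncertainty_sampling

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 4))
y_pool = (X_pool[:, 0] > 0).astype(int)

# Seed the learner with a small labelled subset
learner = ActiveLearner(
    estimator=RandomForestClassifier(),
    query_strategy=uncertainty_sampling,
    X_training=X_pool[:10],
    y_training=y_pool[:10],
)

# Repeatedly query the most informative sample, "ask the expert", and teach the model
for _ in range(20):
    query_idx, query_inst = learner.query(X_pool)
    learner.teach(X_pool[query_idx], y_pool[query_idx])  # stand-in for an expert label
    X_pool = np.delete(X_pool, query_idx, axis=0)        # drop the queried sample from the pool
    y_pool = np.delete(y_pool, query_idx)
```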

Scaling Deep Learning Using Polyaxon with Mourad Mourafiq - Episode 158

With libraries such as TensorFlow, PyTorch, scikit-learn, and MXNet being released, it is easier than ever to start a deep learning project. Unfortunately, it is still difficult to manage the scaling and reproducibility of training for these projects. Mourad Mourafiq built Polyaxon on top of Kubernetes to address this shortcoming. In this episode he shares his reasons for starting the project, how it works, and how you can start using it today.

Luminoth: AI Powered Computer Vision for Python with Joaquin Alori - Episode 154

Making computers identify and understand what they are looking at in digital images is an ongoing challenge. Recent years have seen notable increases in the accuracy and speed of object detection due to deep learning and new applications of neural networks. In order to make it easier for developers to take advantage of these techniques, Tryo Labs built Luminoth. In this interview Joaquin Alori explains how Luminoth works, how it can be used in your projects, and how it compares to API-oriented services for computer vision.
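
As a rough illustration, here is a sketch of running inference with one of Luminoth's pre-trained checkpoints; it assumes a checkpoint has already been downloaded with the lumi CLI, and the API details may differ between releases:

```python
# Rough sketch of object detection with a pre-trained Luminoth checkpoint.
# Assumes a checkpoint has already been fetched with the lumi CLI; the
# checkpoint alias and image path are placeholders.
from luminoth import Detector, read_image, vis_objects

image = read_image("street.jpg")          # any local image file
detector = Detector(checkpoint="fast")    # alias of a downloaded pre-trained checkpoint

objects = detector.predict(image)         # list of detected objects with bounding boxes
print(objects)

vis_objects(image, objects).save("street-objects.png")
```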

PyRay: Pure Python 3D Rendering with Rohit Pandey - Episode 147

Using a rendering library can be a difficult task due to dependency issues and complicated APIs. Rohit Pandey wrote PyRay to address these issues in a pure Python library. In this episode he explains how he uses it to gain a more thorough understanding of mathematical models, how it compares to other options, and how you can use it for creating your own videos and GIFs.

Learn Leap Fly: Using Python To Promote Global Literacy with Kjell Wooding - Episode 145

Learning how to read is one of the most important steps in empowering someone to build a successful future. In developing nations, access to teachers and classrooms is not universally available, so the Global Learning XPRIZE serves to incentivize the creation of technology that provides children with the tools necessary to teach themselves literacy. Kjell Wooding helped create Learn Leap Fly in order to participate in the competition and used Python and Kivy to build a platform for children to develop their reading skills in a fun and engaging environment. In this episode he discusses his experience participating in the XPRIZE competition, how he and his team built what is now Kasuku Stories, and how Python and its ecosystem helped make it possible.

Orange: Visual Data Mining Toolkit with Janez Demšar and Blaž Zupan - Episode 142

Data mining and visualization are important skills to have in the modern era, regardless of your job responsibilities. In order to make it easier to learn and use these techniques and technologies, Blaž Zupan and Janez Demšar, along with many others, have created Orange. In this episode they explain how they built a visual programming interface for creating data analysis and machine learning workflows, simplifying the work of gaining insights from the myriad data sources that are available. They discuss the history of the project, how it is built, the challenges that they have faced, and how they plan on growing and improving it in the future.
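
Although Orange is primarily used through its visual canvas, the same components can also be driven from Python scripts; here is a minimal scripting sketch (API details may vary between Orange 3 releases):

```python
# Minimal scripting sketch using Orange's Python API, which exposes the same
# data tables and learners that back the widgets in the visual canvas.
import Orange

data = Orange.data.Table("iris")                              # bundled example dataset
learner = Orange.classification.LogisticRegressionLearner()
model = learner(data)                                         # train on the full table

print(model(data[:3]))                                        # predicted classes for the first rows
```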