Posts


Azure ML vs Databricks for deploying machine learning models

Azure Machine Learning (Azure ML) and Databricks Machine Learning (Databricks ML) are two popular cloud-based platforms for data scientists. Both offer a range of tools and services for building and deploying machine learning models at scale. In this blog post, we'll compare Azure ML and Databricks ML, examining their features and capabilities and highlighting their differences.

Experimentation

Azure ML: The Python API lets you easily create experiments that you can then track from the UI. You can do interactive runs from a notebook. Logging metrics in these experiments still relies on the MLflow client.

Databricks ML: Creating experiments is also easy, with the MLflow API and the Databricks UI. Tracking metrics is really nice with the MLflow API (so nice that Azure ML also uses this client for its model tracking).

Winner: They are pretty much on par here, although the fact that Azure ML uses MLflow (a Databricks product) perhaps gives the edge to Databricks.

Model Ve

Cleaning messy pose estimation

There are several libraries for pose estimation. However, their output can be messy because of missing frames and incorrect detections, and it sometimes needs to be cleaned to get the best quality. I've implemented a simple pose cleaning method to improve the quality of the pose data for my project, and I'd like to share how I did it. The code here assumes single-person pose estimation output from AlphaPose, so feel free to adapt it for other use cases; the approach can be applied to other libraries' output as well. In order to clean up messy pose estimation, we need to:

1) Find correction targets
- Find missing frames
- Find incorrect detections
2) Fix the missing and incorrect frames

The full code of the pose cleaning can be found here.

Find correction target

In the AlphaPose pose estimation output, there's an "image_id" for each frame, so we can easily find missing frames by c
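To make the first step concrete, here is a minimal sketch of finding missing frames from AlphaPose-style output. It assumes each result dict carries an "image_id" naming the source frame, like "12.jpg" (the field name comes from AlphaPose; the sample data and helper name are made up):

```python
def find_missing_frames(results, total_frames):
    """Return indices of frames with no detection, based on "image_id"."""
    detected = {int(r["image_id"].split(".")[0]) for r in results}
    return sorted(set(range(total_frames)) - detected)

# Made-up single-person results: frames 2 and 4 have no detection.
results = [{"image_id": f"{i}.jpg"} for i in (0, 1, 3, 5)]
print(find_missing_frames(results, total_frames=6))  # → [2, 4]
```

The same set difference also flags duplicated or out-of-order detections once you compare the detected indices against the expected frame range.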

OpenPose vs. AlphaPose, Which one is better?

Pose estimation is estimating the configuration of the human body from an image or a video. For my project, I had to extract pose sequences from videos of a single person dancing. There are several open-source pose estimation libraries available, but I was not sure which one performs best and is the most suitable for my project. I tried two of the most renowned libraries - OpenPose and AlphaPose - and compared the results. The short conclusion, in my case, was that AlphaPose was better than OpenPose. Please note that this experiment is case-specific, and the result can vary depending on your data, the dynamics of your videos, the number of people to estimate poses for, etc.

What is OpenPose / AlphaPose?

OpenPose is a multi-person 2D pose estimation system that detects human body, hand, facial, and foot keypoints. It takes a bottom-up approach using Part Affinity Fields (PAFs), which encode information about limbs in the image. Because OpenPose is only based on a single frame, it shows goo