Human Pose Estimation (HPE) is a computer vision problem that has become increasingly popular over the last few years, with multiple applications in the medical field such as therapy using virtual and augmented reality, robot caregivers, virtual physical therapy, and kinematic analysis. Nevertheless, the machine learning algorithms developed for these applications are trained on small datasets, with images captured in constrained scenarios and supplemented by sensor information, limiting the applicability of these methods. We developed a simple yet useful deep learning algorithm for Human Pose Estimation that takes as input only an image of a scene containing people. The estimated positions of the joints and body parts can be used to retrieve basic kinematic information about the people in the image, which can be applied to the aforementioned medical applications. We focus on overcoming the jittering that limits Human Pose Estimation algorithms, aiming to preserve more precise pixel locations. Thus, we explore several novel approaches to improve the precision of existing state-of-the-art keypoint estimation algorithms and evaluate them on the COCO keypoint dataset, outperforming the current top methods. We hope our algorithm encourages the academic community to develop simple yet precise HPE algorithms for medical applications based on RGB images.
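As a minimal illustration of the "basic kinematic information" mentioned above, the sketch below computes a joint angle from three 2D keypoints of the kind an HPE model outputs (e.g., COCO-format pixel coordinates). The specific keypoint coordinates and the helper name are hypothetical, not from the paper:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b, formed by keypoints a-b-c.

    Each argument is a 2D pixel coordinate (x, y), e.g. the
    shoulder, elbow, and wrist keypoints estimated by an HPE model.
    """
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b  # vectors from the joint to its neighbors
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical elbow flexion from shoulder, elbow, wrist pixel coordinates
elbow_flexion = joint_angle((100, 50), (120, 100), (90, 140))
```

Such per-frame angles are the kind of quantity a virtual physical therapy application could track over time from RGB images alone.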
Melanoma skin cancer diagnosis can be challenging due to the similarity of its early-stage symptoms to those of regular moles. Standardized visual parameters can be determined and characterized to raise suspicion of melanoma. Automating this diagnosis could have an impact in the medical field by providing specialists with a high-accuracy support tool. The objective of this study is to develop an algorithm trained to distinguish a highly probable melanoma from a non-dangerous mole through the segmentation and classification of dermoscopic mole images. We evaluate our approach on the dataset provided by the International Skin Imaging Collaboration for the International Challenge on Skin Lesion Analysis Towards Melanoma Detection. For the segmentation task, we apply a preprocessing algorithm and use Otsu's thresholding in the best-performing color space; the average Jaccard Index on the test dataset is 70.05%. For the subsequent classification stage, we use joint histograms in the YCbCr color space, an RBF Gaussian SVM trained with five features describing the circularity and irregularity of the segmented lesion, and Gray-Level Co-occurrence Matrix features for texture analysis. These features are combined to obtain an Average Classification Accuracy of 63.3% on the test dataset.