
Automated Skin Cell Detection and Morphometry

A HIROX and MIPAR Integrated Solution

Abstract

This case study presents an integrated workflow for automated skin cell detection and
morphometry, combining HIROX high-resolution digital microscopy with MIPAR’s deep learning-powered image analysis software. The pipeline uses a SegNet semantic segmentation model trained on four pixel classes (cell interior, cell wall, bright background, and dark background), followed by morphological centroid extraction, Spotlight boundary detection, automated artifact rejection, and operator-supervised single-click review. The system is designed to minimize false positives on the first pass, reducing the operator’s role to adding missed cells rather than removing incorrect detections. Internal testing demonstrated a reduction in per-image analysis time from 5 to 15 minutes (fully manual) to 1 to 2 minutes with the automated pipeline. All results, including per-cell count and area measurements, are exported in open-source CSV format for downstream processing and stored in traceable session files for regulatory compliance.

Summary
Challenge

Manual skin cell quantification is limited by inter-observer variability, low throughput, and poor repeatability. When analysts count cells by hand, results vary between operators and even between sessions for the same operator. Imaging artifacts, overlapping cell boundaries, and high-confluence regions compound the difficulty, making consistent area measurement nearly impossible without semantic segmentation. Any automated solution must still allow human oversight of the detection results, but the workflow should be designed so that the operator’s role is to add missed cells rather than remove false detections, reducing review burden and improving confidence in the final count.

Figure 1. Raw skin cell image (left) and MIPAR Spotlight detection overlay with color-coded cell segmentation (right). Colors represent individual cell area measurements.
Solution

The solution pairs HIROX HRX-01 digital microscope optics with MIPAR’s automated image analysis software to create an end-to-end workflow for skin cell detection. MIPAR’s analysis pipeline integrates deep-learning-based semantic segmentation with its Spotlight boundary-detection engine, providing both cell-count and cell-area measurements. The workflow is designed for operator oversight. A built-in manual supervision tool (Snap) allows single-click correction of missed cells. All measurement data is exported in open-source CSV format for downstream processing. 

Key Metrics

The primary metric for this workflow is detection accuracy, with an explicit emphasis on minimizing false positives over minimizing false negatives. This design choice means the automated first pass produces high-confidence detections, and the operator’s review task is limited to adding any cells the model missed rather than removing incorrectly detected artifacts. Internal testing showed a reduction in per-image analysis time from 5–15 minutes (fully manual) to 1–2 minutes with the MIPAR-powered pipeline, depending on cell density and initial detection accuracy.

Introduction: The Need for Automated Skin Cytometry
Context

Quantitative analysis of skin cell morphology is central to dermatological drug screening, wound healing studies, and tissue engineering quality control. Cell count, individual cell area, and population-level size distributions serve as primary endpoints in evaluating culture viability, treatment response, and graft readiness. As experimental throughput increases, driven by high-content screening platforms and multi-well culture systems, the need for reliable, scalable image analysis has outpaced what manual methods can deliver.

Current Bottlenecks

Manual cell counting is time-intensive and inherently subjective. Inter-observer variability increases with cell confluence, where overlapping boundaries make it difficult to distinguish individual cells consistently. Traditional automated approaches based on global intensity thresholding or watershed segmentation struggle with the heterogeneous contrast conditions found in skin cell cultures, particularly when imaging artifacts, debris, or non-uniform illumination are present. These methods tend to either over-segment dense regions (inflating cell count) or under-segment them (merging adjacent cells), and they provide no mechanism for operator review without restarting the analysis. The result is a tradeoff between throughput and accuracy that neither manual nor simple threshold-based methods resolve satisfactorily. 
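
To make this limitation concrete, the sketch below shows what such a classical baseline might look like in Python using scikit-image (a global Otsu threshold followed by a distance-transform watershed). It is purely illustrative and is not part of the MIPAR pipeline; the file name and seed heuristic are placeholders.

```python
# Illustrative classical baseline (NOT the MIPAR pipeline): a global Otsu
# threshold plus distance-transform watershed. On heterogeneous skin cell
# images this approach tends to over- or under-segment dense regions.
from scipy import ndimage as ndi
from skimage import io, filters, measure, segmentation

image = io.imread("skin_cells.tif", as_gray=True)   # placeholder file name

# One global cutoff for the whole field of view -- sensitive to uneven contrast.
binary = image > filters.threshold_otsu(image)

# Split touching cells with a distance-transform watershed.
distance = ndi.distance_transform_edt(binary)
seeds = measure.label(distance > 0.5 * distance.max())   # crude seed heuristic
labels = segmentation.watershed(-distance, seeds, mask=binary)

print(f"{labels.max()} candidate cells detected (no operator review)")
```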

Hardware Configuration & Image Acquisition 

The imaging system used in this study is built around the HIROX HRX-01, an ultra-high resolution digital microscope with a portable main control unit. The system was configured with the following components.

Figure 2. HIROX HRX-01 digital microscope system with portable control unit, 3D mouse controller, 4K monitor, and motorized XY stage.
Optical Setup

The images used for developing and validating the MIPAR analysis pipeline were acquired using the HR-2500 telecentric zoom lens at 350x magnification. The HR-2500 is paired with the AC-1020D diffusing adapter, which provides uniform illumination across the field of view. The final production configuration uses the HR-5000 (20-5000x) turret zoom lens, which offers a broader magnification range for flexibility across different sample types and imaging requirements. Both lens configurations use the HR-1020E telecentric ultra-high resolution auto zoom lens body (10-200x base range).

System Configuration

The microscope is mounted on the ST-AS high precision free-angle stand with motorized Z-axis, providing repeatable positioning for multi-sample workflows. Sample positioning is handled by the AS-100 (100x100 mm) auto XY stage. The system includes the HRS-3D advanced 3D application software and XY-3DMP 3D mouse pro/remote device for intuitive navigation and capture control. The deployed configuration also includes a workstation PC with a 4K resolution monitor for high-fidelity image review.

Consistency

The telecentric lens design ensures consistent magnification across the depth of field, eliminating perspective distortion that can affect cell area measurements in non-telecentric systems. The AC-1020D diffusing adapter provides uniform illumination across the entire field of view, which is critical for the downstream deep learning segmentation pipeline. Non-uniform illumination introduces local contrast variations that can cause false detections or missed cells in intensity-dependent analysis methods. The motorized Z-axis and auto XY stage enable repeatable positioning across imaging sessions, supporting batch processing workflows where consistency between images is essential.

Analytical Methodology
Segmentation Strategy

The detection pipeline uses a multi-stage approach that combines deep learning segmentation with classical morphological processing and MIPAR’s Spotlight boundary detection engine.

Figure 3. Methods at a glance. The five-stage detection pipeline from raw image input through automated segmentation and operator review to final cell count and area measurements. Purple: deep learning stages. Blue: morphological processing. Green: Spotlight-powered detection and review.


Stage 1 – Semantic Segmentation (SegNet). A SegNet deep learning model was trained to classify each pixel into one of four classes: cell interior, cell wall, bright background, and dark background. This four-class formulation is critical. Rather than a simple binary cell/background classification, the explicit cell wall class provides the boundary information needed for accurate separation of adjacent cells, and the two background classes enable robust artifact rejection regardless of local illumination conditions.
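
As an illustration only (not MIPAR’s internal implementation), the sketch below shows one way a four-class per-pixel output could be organized into the confidence maps used by the later stages; the array layout, class ordering, and names are assumptions made for this example.

```python
import numpy as np

# Assumed model output: an array of shape (4, H, W) holding per-pixel class
# confidences in this (assumed) order.
CLASSES = ("cell_interior", "cell_wall", "bright_background", "dark_background")

def split_confidence_maps(probs: np.ndarray) -> dict:
    """Split a 4-class confidence volume into the maps used downstream."""
    maps = {name: probs[i] for i, name in enumerate(CLASSES)}
    maps["labels"] = probs.argmax(axis=0)        # per-pixel winning class
    # Combined background confidence, later used for artifact rejection (Stage 4).
    maps["background"] = probs[2] + probs[3]
    return maps
```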

Stage 2 – Centroid Extraction. The cell wall confidence map from the SegNet model is processed through a series of binary morphology operations (erosion, dilation, and connected component analysis) to identify cell centroids. These centroids serve as seed points for the next stage and represent high-confidence cell locations. 
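
A minimal sketch of this idea, using SciPy’s binary morphology and connected-component labeling, is shown below; the threshold and iteration counts are placeholder values rather than the parameters used in the actual recipe.

```python
import numpy as np
from scipy import ndimage as ndi

def extract_centroids(conf_map, threshold=0.5, erosions=2, dilations=1):
    """Threshold a confidence map, clean it with binary morphology, and
    return an (N, 2) array of connected-component centroids.

    All numeric parameters here are placeholders for illustration.
    """
    binary = conf_map > threshold
    binary = ndi.binary_erosion(binary, iterations=erosions)    # break thin bridges
    binary = ndi.binary_dilation(binary, iterations=dilations)  # restore region size
    labeled, n = ndi.label(binary)                               # connected components
    if n == 0:
        return np.empty((0, 2))
    return np.array(ndi.center_of_mass(binary, labeled, range(1, n + 1)))
```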

Stage 3 – Spotlight Boundary Detection. The extracted centroids are used as seeds for MIPAR’s Spotlight engine, which performs precise boundary detection around each cell. This produces accurate per-cell segmentation masks suitable for area measurement, a key requirement since cell area is a primary reported metric.

Stage 4 – Artifact Rejection. The background confidence map (combining both bright and dark background classes) is subtracted from the detection results, removing any regions that the model identified as non-cellular material. This step ensures that debris, scratches, and other imaging artifacts do not contribute to the final cell count or area measurements. 
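
Conceptually, this step amounts to masking the labeled detections with the combined background confidence map and dropping detections that lose essentially all of their pixels. The sketch below illustrates that idea; the cutoff and minimum-area values are placeholders, not values from the production recipe.

```python
import numpy as np

def reject_artifacts(cell_labels, background_conf, cutoff=0.5, min_area=20):
    """Remove detections that the model considers background.

    cell_labels     : integer label image of detected cells
    background_conf : combined bright + dark background confidence map
    cutoff, min_area: placeholder values for illustration
    """
    cleaned = cell_labels.copy()
    cleaned[background_conf > cutoff] = 0            # erase background pixels
    # Discard any detection whose remaining area is negligible.
    ids, counts = np.unique(cleaned[cleaned > 0], return_counts=True)
    for label_id, area in zip(ids, counts):
        if area < min_area:
            cleaned[cleaned == label_id] = 0
    return cleaned
```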


Stage 5 – Manual Supervision (Snap). After automated processing, the operator reviews the detection overlay. MIPAR’s Snap tool, powered by the same Spotlight engine, allows single-click addition of any cells the automated pipeline missed. Because the pipeline is tuned to minimize false positives, the operator’s task is exclusively additive: clicking to include missed cells rather than manually deleting false detections. This design significantly reduces review time and cognitive load.

Feature Extraction

Two primary measurements are extracted from the segmentation results: cell count and individual cell area. Because the Spotlight-based boundary detection produces per-cell segmentation masks (rather than a single binary foreground mask), area measurements are computed directly from the pixel count of each labeled region, calibrated to the image’s spatial resolution. Accurate boundary delineation is therefore a prerequisite for meaningful area data. Over-segmentation inflates the count while deflating the per-cell area, and under-segmentation does the reverse.
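
As a simple sketch of that calculation (not MIPAR’s implementation), per-cell areas can be read from a labeled mask with scikit-image and scaled by the spatial calibration; the calibration value in the usage comment is an assumed example.

```python
from skimage import measure

def per_cell_areas(cell_labels, um_per_px):
    """Return per-cell area measurements from a labeled segmentation mask."""
    px_area = um_per_px ** 2                      # square micrometres per pixel
    return [
        {"cell_id": region.label, "area_um2": region.area * px_area}
        for region in measure.regionprops(cell_labels)
    ]

# e.g. rows = per_cell_areas(cleaned_labels, um_per_px=0.5)   # assumed calibration
# cell count = len(rows); per-cell areas feed the distribution statistics
```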

Results & Validation
Accuracy vs. Ground Truth 

No formal gauge repeatability and reproducibility (GR&R) study was performed comparing the automated pipeline against expert manual counts for this case study. Future validation work should include a multi-operator study with a defined ground truth dataset to quantify agreement metrics such as precision, recall, and F1 score at the per-cell level. 
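
For reference, once automated detections have been matched against a ground-truth annotation, those per-cell metrics reduce to simple counting; the helper below is a generic sketch and not part of this case study.

```python
def detection_scores(tp, fp, fn):
    """Per-cell precision, recall, and F1 from matched detections.

    tp: detections matched to a ground-truth cell
    fp: detections with no ground-truth match (false positives)
    fn: ground-truth cells with no matching detection (missed cells)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```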

Reproducibility

A formal reproducibility study has not yet been conducted. However, the deterministic nature of the automated pipeline (SegNet inference, morphological processing, and Spotlight boundary detection) ensures that identical input images will produce identical results without operator intervention. Variability between runs is introduced only during the manual Snap review step, where different operators may add slightly different sets of missed cells. Quantifying this inter-operator variability through a structured GR&R study is a recommended next step.
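
One lightweight way to check this determinism (a generic sketch, not a MIPAR feature) is to fingerprint the automated output of repeated runs on the same image and confirm the hashes match before any Snap edits are applied:

```python
import hashlib
import numpy as np

def result_fingerprint(cell_labels):
    """SHA-256 fingerprint of a labeled result array for run-to-run comparison."""
    return hashlib.sha256(np.ascontiguousarray(cell_labels).tobytes()).hexdigest()

# Identical automated runs on the same image should yield identical fingerprints;
# differences should appear only after manual additions during Snap review.
```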

Throughput Gains

MIPAR internal testing compared fully manual analysis against the MIPAR-powered pipeline across a representative set of images with varying cell densities. Manual analysis required approximately 5 to 15 minutes per image, with the wide range driven primarily by the number of cells present and the difficulty of resolving overlapping boundaries. The automated pipeline, including SegNet inference, morphological processing, Spotlight boundary detection, and operator review via Snap, reduced total analysis time to 1 to 2 minutes per image. The remaining time variability is attributable to the number of cells requiring manual addition during the Snap review step, which itself depends on initial detection accuracy for the given image.

Data Integrity & Export
Reporting

MIPAR provides two primary report types for skin cell analysis. The Color by Measure report renders each detected cell with a color corresponding to its measured area, enabling rapid visual identification of size outliers and spatial patterns across the culture. The Summary report aggregates per-image statistics, including total cell count, mean cell area, area standard deviation, and area distribution histograms.

Figure 4. MIPAR Color by Measurements report. Left: per-cell area measurement table sorted by area. Center: detection overlay with color-coded cells and area labels. Right: area distribution histogram with fitted curve and summary statistics (mean, standard deviation, min, max).
Traceability

Results from batch processing can be saved in a session file, which can be reloaded into MIPAR to review detection accuracy. Measurement results are saved to a CSV file for downstream processing. All results are stored in open file formats and can be reviewed outside of MIPAR if required.
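
Because the export is plain CSV, downstream tools need nothing MIPAR-specific to consume it. A minimal reading example in Python is shown below; the file name and column headers are assumed for illustration and should be replaced with those in the actual export.

```python
import csv
import statistics

# Read a per-cell measurement CSV downstream of the analysis.
# "cell_measurements.csv" and the column names are assumptions for this example.
with open("cell_measurements.csv", newline="") as f:
    areas = [float(row["Area (um^2)"]) for row in csv.DictReader(f)]

print(f"cells:     {len(areas)}")
print(f"mean area: {statistics.mean(areas):.1f} um^2")
print(f"std dev:   {statistics.stdev(areas):.1f} um^2")   # needs at least two cells
```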

Figure 5. MIPAR Summary Report showing raw image and detection overlay side by side, spatial calibration (scale factor and units), aggregate measurements (cell count and mean area), recipe version, and fields for comments and signature to support traceability requirements.
Conclusion

This case study demonstrates how pairing high-resolution digital microscopy with deep-learning-driven image analysis addresses the core limitations of manual skin cell quantification. The HIROX and MIPAR workflow delivers consistent, operator-supervised cell detection with per-cell area measurement, reducing analysis time by approximately 75–90% compared to fully manual methods while maintaining the traceability and oversight that regulated environments demand. By prioritizing precision over recall in the automated first pass and providing an intuitive single-click correction tool, the system minimizes both false positives and operator fatigue. All results are exported in open formats, ensuring compatibility with downstream statistical and data management pipelines. As imaging throughput continues to scale in dermatological research and tissue engineering, integrated acquisition-analysis solutions like this will be essential for translating raw microscopy data into actionable biological insights.

Contact

MIPAR Image Analysis Software
Web: www.mipar.us 

Email: [email protected] 
Phone: (614) 407-4510 

Hirox USA Inc. 
Website: hirox-usa.com 
Phone: (201) 342-2600 
Fax: (201) 342-7322 
​Email: [email protected]
