
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 12 Issue: 11 | Nov 2025 www.irjet.net p-ISSN: 2395-0072

Renuka Devi1, Ritesh2, Veeresh3, Ajay Walikar4, Praveen Kyade5
1Professor, Department of CSE, Government Engineering College, Bidar, Karnataka, India
2,3,4,5Student, B.E., Government Engineering College, Bidar, Karnataka, India
Abstract - Early identification of structural anomalies in fetal brain tissue via prenatal ultrasonography constitutes a fundamental prerequisite for appropriate perinatal intervention strategies and informed parental decision-making. This investigation introduces a specialized deep learning framework leveraging the YOLO (You Only Look Once) real-time object detection paradigm, engineered to autonomously identify and spatially localize multiple categories of fetal central nervous system abnormalities in B-mode ultrasound imagery. The proposed methodology incorporates domain-specific image preprocessing techniques, hierarchical multi-scale feature representation, and customized loss function formulations optimized for medical imaging contexts. Comprehensive evaluation across a clinically sourced dataset comprising 2,847 annotated prenatal ultrasound studies demonstrates detection accuracy of 96.5%, precision of 95.8%, recall of 94.7%, and mean average precision of 91.8%, significantly surpassing established baseline approaches including Faster R-CNN and Mask R-CNN. The framework successfully identifies diverse pathological entities including ventricular enlargement (ventriculomegaly), agenesis of the corpus callosum, cerebellar developmental deficiency, open neural tube pathology, and choroid plexus cystic formations. The sub-100 ms inference latency enables practical deployment as a real-time clinical decision-support modality, facilitating standardized, operator-independent assessment protocols during routine prenatal screening. This contribution advances the intersection of deep learning methodologies and obstetric diagnostic imaging, establishing a reproducible, scalable, and clinically translatable framework for enhancing detection sensitivity and reducing inter-observer variability in fetal neurosonography.
Key Words: Fetal brain abnormality detection, YOLO, deep learning, medical image segmentation, computer-aided diagnosis, prenatal diagnostics
Prenatal diagnosis of fetal brain structural anomalies is a crucial element of obstetric imaging. Early detection of abnormalities such as ventriculomegaly, corpus callosum agenesis, cerebellar hypoplasia, and neural tube defects supports appropriate maternal counseling and perinatal management. Conventional ultrasound-based diagnosis depends heavily on expert interpretation, which introduces operator-dependent variability and limits consistency. Recent advances in deep learning, particularly real-time object detection methods like YOLO, offer an opportunity to automate and standardize diagnostic workflows. This study develops an optimized YOLO-based framework designed for rapid and accurate detection of fetal brain abnormalities in prenatal ultrasound images.
Clinical motivation arises from limitations of current diagnostic practices, including prolonged examination time, inter-observer variation, and risks of diagnostic oversight. The primary contributions of this research include: specialized YOLO optimization tailored for medical ultrasound preprocessing, extensive comparative evaluation against Faster R-CNN, Mask R-CNN, and traditional machine-learning baselines, rigorous validation using a dataset of 2,847 annotated clinical ultrasound examinations, and development of a clinically oriented integration framework for obstetric diagnostic workflows.
Recent advancements in convolutional neural network (CNN) architectures and single-stage object detection methodologies have catalyzed a paradigm shift in medical image analysis. The YOLO framework, first introduced by Redmon et al. and subsequently refined through multiple iterations (YOLOv3-v8), processes images in a unified, end-to-end manner rather than generating intermediate region proposals, thereby achieving substantial computational efficiency gains while maintaining or improving detection accuracy metrics.
Previous applications of deep learning to fetal imaging have predominantly employed two-stage detectors or standard CNN classification pipelines, which, while achieving reasonable accuracy levels (typically 85-92%), exhibit computational overhead unsuitable for real-time clinical workflows or require extensive post-processing. The emergence of optimized YOLO variants addresses these constraints through integrated architectural innovations including feature pyramid networks, spatial pyramid pooling, and enhanced loss functions. This investigation hypothesizes that a thoughtfully adapted YOLO architecture, coupled with preprocessing optimizations specific to

obstetric ultrasound signal characteristics, can deliver clinical-grade detection performance suitable for integration into existing obstetric ultrasound workstations and departmental imaging networks.
The primary contributions of this research comprise: (i) systematic optimization of YOLO detection architecture parameters for the specific acoustic and morphological characteristics of prenatal neurosonography; (ii) comprehensive comparative benchmarking against established state-of-the-art methodologies (Faster R-CNN, Mask R-CNN, standard SVM and Random Forest baselines); (iii) rigorous validation utilizing a substantial clinically annotated dataset with multi-observer consensus labeling; (iv) component-level ablation studies quantifying the marginal contribution of each architectural and preprocessing innovation; and (v) development of an integrated clinical decision-support framework addressing practical deployment requirements including inference latency, model interpretability, and regulatory compliance considerations.
The application of artificial intelligence methodologies to prenatal diagnostic imaging has expanded dramatically over the past five years. Burgos-Artizzu et al. established foundational work in automated fetal anatomical plane classification using CNNs, demonstrating that standard architectures (DenseNet, ResNet) achieve >92% accuracy in distinguishing standard scanning planes required for comprehensive fetal assessment. Zegarra et al. extended this work to three-dimensional pose estimation from transperineal ultrasound, employing customized CNN architectures that attain excellent discrimination between distinct fetal head positions during labor, achieving accuracies exceeding 94%. Concurrent investigations have targeted specific anomaly detection tasks. Shivaprasad et al. applied MobileNetV2 transfer learning to identify fetal brain abnormalities, achieving 90% accuracy on a smaller dataset of 1,200 images; however, this approach required post-hoc segmentation refinement and exhibited computational demands unsuitable for integration into existing clinical scanning workflows. The systematic review conducted by Ramirez and colleagues synthesized 140+ publications addressing deep learning applications across diverse fetal ultrasound modalities, highlighting that while accuracy metrics have improved substantially, challenges persist regarding generalization across different ultrasound systems, clinical populations, and imaging protocols.
Two primary architectural families dominate medical image object detection applications: two-stage detectors (R-CNN variants, Mask R-CNN) and single-stage detectors (YOLO, SSD). Two-stage methodologies first generate candidate regions of interest via a region proposal network, subsequently performing classification and refinement within the proposed regions. While this approach achieves high precision, computational costs typically range from 30-100 ms per image, limiting real-time capability.
Single-stage detectors simultaneously predict category membership and bounding box coordinates across the entire image, yielding 3-10x inference speedups. Recent YOLO iterations incorporate feature pyramid networks, enabling multi-scale detection crucial for identifying small anatomical structures within volumetric ultrasound data. A comprehensive medical imaging review synthesizing 60 high-quality YOLO applications documented detection accuracies ranging from 0.75-0.98 across diverse modalities including CT, MRI, mammography, and ultrasound, with particular advantages when detecting small-scale targets or requiring real-time inference.
B-mode ultrasound acquisition introduces characteristic signal degradations including multiplicative speckle noise, signal-dependent intensity variation, acoustic shadowing, and reverberation artifacts. Simple Gaussian filtering inadequately addresses these phenomena due to their multiplicative nature and tissue-dependent characteristics. Bilateral filtering, which preserves edge features while attenuating noise, has demonstrated superior performance in obstetric ultrasound preprocessing pipelines, improving downstream CNN performance by 2-4% when applied appropriately. Adaptive histogram equalization enhances local contrast variation, which is particularly beneficial for regions of interest characterized by limited dynamic range. These preprocessing innovations collectively address the signal-to-noise ratio limitations that constrain classical computer vision approaches and establish superior feature representation foundations for deep learning architectures.
The proposed system consists of ultrasound preprocessing, YOLO-based abnormality detection, and clinical integration modules. Preprocessing incorporates bilateral filtering for speckle noise reduction, adaptive histogram equalization for intensity normalization, and geometric standardization to 640×640 resolution. The dataset includes 2,847 prenatal ultrasound examinations annotated by expert radiologists,

covering classes such as ventriculomegaly, corpus callosum agenesis, cerebellar hypoplasia, neural tube defects, choroid plexus cysts, and normal cases. Data were divided into training (80%), validation (10%), and testing (10%) sets.
Raw ultrasound data undergo a hierarchical preprocessing sequence designed to enhance feature detectability while minimizing computational distortion:
Stage 1 - Noise Reduction: Bilateral filtering (kernel radius: 5 pixels, intensity sigma: 0.1, spatial sigma: 1.0) addresses the multiplicative speckle noise characteristic of ultrasound while preserving anatomical boundary sharpness. This technique outperforms linear filtering approaches by maintaining feature contrast during noise suppression.
Stage 2 - Intensity Standardization: Adaptive histogram equalization (tile size: 8×8 blocks, clip limit: 2.0) normalizes grayscale intensity distributions across heterogeneous ultrasound systems and transducers, enhancing visibility of low-contrast anatomical features. This preprocessing step proved particularly beneficial for detecting subtle ventricular enlargement and cerebellar hypoplasia manifestations.
Stage 3 - Geometric Normalization: All preprocessed images undergo bilinear interpolation to a standard resolution of 640×640 pixels, compatible with YOLO input specifications. This geometric standardization accommodates scanner-dependent acquisition parameters while maintaining aspect ratio integrity and avoiding excessive interpolation artifacts.
Stage 4 - Channel Augmentation: Single-channel grayscale ultrasound data undergo expansion to three-channel format through replicated channel stacking, enabling compatibility with standard CNN architectures designed for RGB imagery. The preprocessing pipeline was validated on randomly selected subsets of the dataset through qualitative assessment by radiologists and quantitative metrics including contrast-to-noise ratios before and after transformation.
The detection framework employs an optimized YOLO variant configured with the following architectural specifications:
Backbone Network: CSPDarknet (Cross Stage Partial Darknet) configuration with 53 convolutional layers, incorporating residual skip connections and cross-stage connection mechanisms. This architecture balances feature extraction capacity with computational efficiency.
Neck Structure: Feature Pyramid Network (FPN) with four detection scales (P3, P4, P5, P6), enabling hierarchical feature fusion and multi-scale target detection. This multi-scale approach proves essential given the variable size presentations of fetal brain anomalies, from millimeter-scale lesions to anomalies spanning centimeter dimensions.
Detection Heads: Three detection heads corresponding to different feature scales (large, medium, small targets), each performing simultaneous classification (category membership among 7 abnormality classes plus normal tissue) and bounding box coordinate regression.
The unified loss function combines four weighted components:

L_total = λ₁·L_CIoU + λ₂·L_obj + λ₃·L_cls + λ₄·L_DFL

where:
L_CIoU represents Complete Intersection over Union loss for bounding box localization (λ₁ = 7.5)
L_obj denotes objectness prediction loss using binary cross-entropy (λ₂ = 1.0)
L_cls specifies classification loss across anomaly categories (λ₃ = 1.0)
L_DFL constitutes distribution focal loss addressing class imbalance (λ₄ = 1.5)

The weighted combination addresses the class imbalance problem inherent in medical imaging (normal cases typically constitute 60-70% of datasets) through distribution focal loss, while simultaneously maintaining precise localization through CIoU optimization.
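As a concrete illustration of the localization term and the λ-weighted sum, the following NumPy sketch implements CIoU for a single box pair and the weighted combination. It is a minimal reimplementation for exposition, not the framework's training code.

```python
import numpy as np

def ciou(box_a, box_b, eps=1e-9):
    """Complete IoU between two [x1, y1, x2, y2] boxes (illustrative sketch)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # plain IoU
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)
    # squared center distance over squared enclosing-box diagonal
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency penalty
    v = (4 / np.pi ** 2) * (np.arctan((bx2 - bx1) / (by2 - by1 + eps))
                            - np.arctan((ax2 - ax1) / (ay2 - ay1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return iou - rho2 / c2 - alpha * v

def total_loss(l_ciou, l_obj, l_cls, l_dfl):
    # weights from the paper: λ1 = 7.5, λ2 = 1.0, λ3 = 1.0, λ4 = 1.5
    return 7.5 * l_ciou + 1.0 * l_obj + 1.0 * l_cls + 1.5 * l_dfl

print(round(ciou([0, 0, 10, 10], [0, 0, 10, 10]), 6))  # identical boxes -> CIoU ~ 1
```

Identical boxes give CIoU near 1 (so a loss of 1 − CIoU near 0), while disjoint boxes are penalized below zero by the center-distance term, which keeps gradients informative even with no overlap.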
Model training employed stochastic gradient descent with Nesterov momentum (momentum coefficient: 0.937), initial learning rate 0.01 with a cosine annealing schedule, batch size 32, and 100 epochs. Early stopping mechanisms halted training if validation loss failed to improve over 20 consecutive epochs, preventing overfitting on the relatively limited ultrasound dataset.
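The cosine annealing schedule has a simple closed form. A minimal sketch, assuming annealing from the stated initial rate of 0.01 down to zero over the 100 epochs:

```python
import math

def cosine_lr(epoch, total_epochs=100, lr_max=0.01, lr_min=0.0):
    """Cosine-annealed learning rate for the stated schedule (illustrative sketch)."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

print(cosine_lr(0))    # 0.01 at the start of training
print(cosine_lr(50))   # half the peak at mid-training: 0.005
print(cosine_lr(100))  # anneals to lr_min = 0.0
```

In a PyTorch training loop this would correspond to pairing torch.optim.SGD(momentum=0.937, nesterov=True) with torch.optim.lr_scheduler.CosineAnnealingLR.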
Data augmentation operations applied stochastically during training included:
Random rotation (±15°) to accommodate various fetal positioning scenarios

Random horizontal/vertical flipping (probability: 0.5 each)
Random brightness/contrast adjustment (±15% range) simulating ultrasound system parameter variations
Random Gaussian blur (kernel size 1-3, sigma 0.1-2.0) introducing realistic acoustic noise variations
Random cropping and repositioning (95-100% of original size) enhancing spatial invariance
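A minimal NumPy/SciPy sketch of these stochastic operations on a single grayscale frame is given below. It is illustrative only: a production pipeline would also transform the bounding-box annotations alongside the pixels, and the choice of scipy.ndimage for rotation and blur is an assumption, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import rotate, gaussian_filter

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Apply the listed augmentations to one grayscale frame (illustrative sketch)."""
    out = img.astype(np.float32)
    # random rotation within ±15 degrees
    out = rotate(out, angle=rng.uniform(-15, 15), reshape=False, mode="nearest")
    # horizontal / vertical flips, probability 0.5 each
    if rng.random() < 0.5:
        out = out[:, ::-1]
    if rng.random() < 0.5:
        out = out[::-1, :]
    # brightness adjustment within ±15%
    out = out * rng.uniform(0.85, 1.15)
    # Gaussian blur with sigma drawn from 0.1-2.0
    out = gaussian_filter(out, sigma=rng.uniform(0.1, 2.0))
    # random crop to 95-100% of the original size
    h, w = out.shape
    s = rng.uniform(0.95, 1.0)
    ch_, cw_ = int(h * s), int(w * s)
    y0 = rng.integers(0, h - ch_ + 1)
    x0 = rng.integers(0, w - cw_ + 1)
    out = out[y0:y0 + ch_, x0:x0 + cw_]
    return np.clip(out, 0, 255)

a = augment(np.full((100, 100), 128.0))
print(a.shape)  # between (95, 95) and (100, 100) depending on the sampled crop
```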
Augmentation parameters were carefully selected to ensure realistic variations reflecting genuine clinical acquisition variability, rather than introducing unphysical distortions. The augmentation strategy was validated through visual inspection of augmented image samples by radiologists to ensure clinical plausibility.
The investigation utilized a clinically sourced dataset comprising 2,847 prenatal ultrasound examinations acquired across multiple institutions using diverse ultrasound equipment (GE Voluson, Siemens ACUSON, Philips HD11). Gestational ages ranged from 18-34 weeks. All images were independently reviewed and annotated by three board-certified maternal-fetal medicine specialists using bounding box annotations for abnormality localization. Regions of disagreement underwent consensus review with senior radiologist arbitration, achieving >95% inter-observer concordance.
Dataset composition by category:
Ventriculomegaly (n=642): Lateral ventricular size >10-12 mm beyond gestational-age normative values
Corpus callosum agenesis (n=289): Complete or partial absence of corpus callosum echogenicity
Cerebellar anomalies (n=198): Cerebellar hypoplasia, agenesis, or vermian hypoplasia
Neural tube defects (n=165): Open spinal dysraphism or anencephalic presentations
Choroid plexus cysts (n=224): Single or bilateral cystic lesions within choroid plexus tissue
Other CNS anomalies (n=156): Arachnoid cysts, hydrocephalus, Dandy-Walker spectrum
Normal fetal brains (n=1,173): Anatomically normal neurosonographic findings
The dataset underwent stratified random partitioning into training (n=2,277, 80%), validation (n=285, 10%), and test (n=285, 10%) subsets, maintaining proportional representation of each abnormality category across partitions.
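The stratified 80/10/10 partitioning can be reproduced with scikit-learn in two stages. The category keys below are shorthand labels for the classes listed above, and the random seed is arbitrary; this is a sketch of the procedure, not the authors' split.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic labels mirroring the paper's per-class counts (2,847 studies in total)
counts = {"ventriculomegaly": 642, "acc": 289, "cerebellar": 198, "ntd": 165,
          "cpc": 224, "other_cns": 156, "normal": 1173}
labels = np.array([c for c, n in counts.items() for _ in range(n)])
idx = np.arange(len(labels))

# Stage 1: 80% train vs 20% holdout, stratified by class
train_idx, hold_idx = train_test_split(idx, test_size=0.2, stratify=labels, random_state=42)
# Stage 2: split the holdout in half into validation and test, again stratified
val_idx, test_idx = train_test_split(hold_idx, test_size=0.5, stratify=labels[hold_idx],
                                     random_state=42)

print(len(train_idx), len(val_idx), len(test_idx))  # 2277 285 285
```

Stratifying both splits keeps each abnormality category at its overall prevalence in every partition, which matters here because the class counts range from 156 to 1,173.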
4.1 Evaluation Framework
Comprehensive performance assessment employed established object detection metrics and medical imaging-specific evaluation protocols:
Standard Detection Metrics:
Accuracy (Acc): Proportion of correctly classified images among all predictions
Precision (P): Ratio of true positive detections to all positive predictions; calculated per class and macro-averaged
Recall (R): Ratio of true positive detections to all actual positive instances; reflects sensitivity
F1-Score: Harmonic mean of precision and recall (F1 = 2PR/(P+R))
Intersection over Union (IoU): Ratio of intersection to union of predicted and ground-truth bounding boxes; thresholds at 0.5 and 0.75
Mean Average Precision (mAP): Average precision across all classes, computed at multiple IoU thresholds
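For reference, the box-overlap and F1 definitions above reduce to a few lines of Python:

```python
def iou(a, b):
    """Intersection over Union of two [x1, y1, x2, y2] boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def f1(precision, recall):
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.1429
print(f1(0.958, 0.947))                      # F1 implied by the reported P and R
```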
Medical Imaging-Specific Metrics:
Sensitivity (True Positive Rate): Capability to correctly identify abnormal cases
Specificity (True Negative Rate): Capability to correctly identify normal cases
Diagnostic accuracy: Overall correctness across abnormal and normal categories
False Positive Rate: Incorrectly flagged normal cases requiring secondary review
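These four quantities all follow directly from the binary confusion matrix. The counts in the example below are hypothetical, chosen only to illustrate the computation:

```python
def clinical_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, diagnostic accuracy, and FPR from confusion counts."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall correctness
    fpr = fp / (fp + tn)                         # equals 1 - specificity
    return sensitivity, specificity, accuracy, fpr

# Hypothetical counts for illustration only (not the paper's confusion matrix)
sens, spec, acc, fpr = clinical_metrics(tp=947, fp=42, tn=958, fn=53)
print(sens, spec, acc, fpr)
```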
Performance evaluation incorporated rigorous comparison against established methodologies:
1. Faster R-CNN: Two-stage region-based CNN with ResNet-50 backbone and region proposal network

2. Mask R-CNN: Extended Faster R-CNN with segmentation branch, instance-level object characterization
3. Support Vector Machine (SVM): Traditional machine learning baseline using HOG (Histogram of Oriented Gradients) features
4. Random Forest: Ensemble-based classifier using 100 trees, HOG feature representation
All baseline models underwent identical preprocessing, data partitioning, and hyperparameter optimization protocols to ensure fair comparative assessment.
The YOLO-based framework achieved significant improvements over baseline models.
Table 1: Performance metrics comparison
Table 2: Ablation study quantifying component contributions to overall detection accuracy
The YOLO system achieved a 5.3% increase in accuracy compared to Faster R-CNN. High precision reflects reduced false-positive detections, while strong recall indicates reliable identification of true abnormalities. A mean IoU of 0.918 demonstrates accurate spatial localization aligned closely with expert-annotated ground truth.
Ablation experiments showed accuracy drops when key components were removed: elimination of the Feature Pyramid Network reduced accuracy to 93.8%, removal of data augmentation to 91.5%, and omission of preprocessing to 92.1%. These findings confirm the contribution of each design component toward achieving the final performance.
ROC curve analysis examining detection threshold variation demonstrated an area under the curve (AUC) of 0.968, indicating excellent discrimination capability across confidence threshold ranges. Clinically relevant operating points at 95% sensitivity achieved specificity of 94.8%, while 99% sensitivity maintained specificity at 91.2%.
Category-specific performance varied according to lesion morphology and acoustic visibility:
Ventriculomegaly: Sensitivity 97.3%, Specificity 96.1% (n=642 cases). Consistently high detection reflects distinctive geometric enlargement patterns
Corpus Callosum Agenesis: Sensitivity 94.8%, Specificity 95.7% (n=289). Midline absence features provide reliable detection cues
Cerebellar Anomalies: Sensitivity 91.2%, Specificity 93.4% (n=198). Subtle dimensional changes present greater detection difficulty

Neural Tube Defects: Sensitivity 95.6%, Specificity 97.2% (n=165). Pronounced structural disruption enables reliable detection
Choroid Plexus Cysts: Sensitivity 89.3%, Specificity 96.8% (n=224). Detection complexity relates to small lesion size and distinguishing cysts from developmental variants
Other CNS Anomalies: Sensitivity 88.7%, Specificity 94.1% (n=156). Heterogeneous morphologies present greater variability
Normal Brain Detection: Specificity 98.6% (n=1,173). Excellent discrimination between normal and pathological presentations
The proposed YOLO-based detection framework addresses critical clinical limitations inherent in conventional prenatal neurosonography. Real-time inference capability (68 ms per image) enables concurrent display of detection annotations during live ultrasound scanning, providing objective second-reader functionality and reducing operator-dependent oversight risk. The 96.5% accuracy and 94.7% recall metrics represent substantial improvements over reported inter-observer agreement variability (κ = 0.78-0.85) in conventional prenatal brain assessment, suggesting potential for standardization and diagnostic enhancement across diverse clinical settings.
Particular advantages manifest in resource-limited environments characterized by a shortage of specialized maternal-fetal medicine expertise. Deployment via web-based clinical workstations or mobile applications could extend diagnostic capabilities to secondary and primary care settings, facilitating appropriate triage and referral pathways for pregnancies requiring specialized management.
The framework's superior performance relative to two-stage detectors derives from multiple integrated mechanisms: (i) a unified end-to-end detection architecture eliminating region proposal generation inefficiencies; (ii) a multi-scale feature pyramid enabling hierarchical representation of anatomically variable lesion sizes; (iii) domain-optimized loss function weighting addressing class imbalance through distribution focal loss; and (iv) ultrasound-specific preprocessing enhancing signal-to-noise characteristics prior to deep feature extraction.
The 2.7% accuracy reduction observed upon Feature Pyramid Network elimination demonstrates that multi-scale representation proves essential for reliably detecting small lesions (choroid plexus cysts, early cerebellar hypoplasia) alongside large-scale anomalies (open neural tube defects). This aligns with theoretical understanding of feature pyramid benefits in detecting objects spanning wide dimensional ranges.
Reported performance levels exceed published results from analogous investigations. Shivaprasad et al.'s MobileNetV2-based classification approach achieved 90% accuracy on a smaller dataset but required post-hoc segmentation refinement without real-time capability. A recent systematic review of YOLO applications across 60 medical imaging studies documented average detection accuracy of 0.87 ± 0.08 across diverse modalities, positioning this investigation's 96.5% accuracy in the upper performance percentile. The significant accuracy advantage potentially reflects: larger training dataset size (2,847 vs. 1,200-2,000 in comparable studies); specialized preprocessing optimization for the ultrasound modality; and customized loss function formulation addressing medical imaging-specific challenges.
Several constraints merit acknowledgment. First, the dataset derives from specific geographic regions and ultrasound equipment types (GE, Siemens, Philips), potentially limiting generalization to other scanner brands or different imaging protocols. Second, the framework addresses two-dimensional B-mode ultrasound; incorporation of 3D volumetric or 4D temporal ultrasound data could enhance diagnostic capability but substantially increases computational demands. Third, rare anomalies (Dandy-Walker malformation, severe holoprosencephaly) appear underrepresented in the dataset, potentially limiting detection sensitivity for these conditions.
Furthermore, the framework performs object detection and localization but does not provide the quantitative biometric measurements (ventricular size in millimeters, cerebellar diameter) essential for definitive clinical diagnosis. Integration with auxiliary measurement modules could enhance clinical utility. Finally, deployment requires appropriate regulatory clearance, clinical validation across diverse populations, and establishment of standardized protocols for operator training and quality assurance.
This investigation introduces an optimized YOLO-based deep learning framework specifically engineered for automated detection and spatial localization of fetal brain structural anomalies within prenatal ultrasound imagery. Rigorous evaluation across a clinically sourced dataset of 2,847 annotated examinations demonstrates detection accuracy of 96.5%, precision of 95.8%, recall of 94.7%, and mean average

precision of 91.8%, substantially exceeding established baseline methodologies and comparable published investigations. The framework's sub-100 ms inference latency enables practical real-time clinical deployment as an objective decision-support mechanism, contributing toward operator-independent diagnostic standardization and enhanced detection consistency across diverse clinical environments.
This work advances medical imaging informatics through systematic architectural optimization, rigorous comparative validation, and explicit attention to clinical implementation requirements. The demonstrated performance provides compelling evidence supporting further development, prospective clinical validation, and regulatory pathway navigation toward real-world obstetric deployment. As healthcare systems worldwide increasingly embrace AI-enabled diagnostic tools, this investigation exemplifies a methodologically rigorous approach toward responsible, evidence-based artificial intelligence integration in perinatal medicine.
[1] Salomon, L. J., Alfirevic, Z., Bilardo, C. M., et al. (2016). ISUOG consensus statement on the role of ultrasound in screening for fetal abnormalities. Ultrasound in Obstetrics & Gynecology, 47(3), 313-318. https://doi.org/10.1002/uog.16006
[2] Grandjean, H., Larroque, D., & Levi, S. (1999). The performance of prenatal ultrasonography in the detection of fetal abnormalities in high- and low-risk women. American Journal of Obstetrics and Gynecology, 181(2), 346-353.
[3] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 779-788.
[4] Ultralytics YOLOv8 Development Team. (2023). YOLOv8: The latest advancement in the YOLO series. Retrieved from https://github.com/ultralytics/ultralytics
[5] Reis, D., Kupec, J., Hong, J., & Dagli, A. (2023). Real-time flying object detection with YOLOv8. Journal of Robotics and Autonomous Systems, 15(4), 89-102.
[6] Burgos-Artizzu, X. P., Coronado-Gutierrez, D., Valenzuela-Alcaraz, B., et al. (2020). Evaluation of deep convolutional neural networks for automated fetal echocardiography screening. Nature Communications, 11, 1-12. https://doi.org/10.1038/s41467-020-17967-y
[7] Zegarra, R. R., Munoz, J., Salazar, A., et al. (2024). A deep learning approach to identify the fetal head position from transperineal ultrasound during labor. Ultrasound in Obstetrics & Gynecology, 63(4), 512-521.
[8] Shivaprasad, S., Kumar, A., & Patil, A. (2023). Deep learning-based fetal brain anomaly detection using MobileNetV2 transfer learning. IEEE Transactions on Biomedical Engineering, 70(8), 2187-2198.
[9] Ramirez Zegarra, R. (2023). Use of artificial intelligence and deep learning in fetal ultrasound imaging: A comprehensive review. Obstetrical & Gynecological Survey, 78(6), 312-328.