In complex underwater environments, traditional methods struggle to accurately locate dense, blurred, and small-sized organisms. Although convolutional neural network algorithms are widely used, they are limited by insufficient training samples and other constraints, yielding only modest gains in accuracy and speed. To address this, this paper designs a YOLOv7-CBF network model based on the YOLOv7 network. By introducing the CBIF module and the FasterNet module, the model fuses local and global information, improves feature extraction capability, and reduces redundant computation, effectively extracting contextual semantic information. Meanwhile, a new enhanced loss function, ECLOU, is proposed to improve localisation accuracy and model robustness. Experiments show that the model performs well in underwater seafood detection, with accuracy and speed that meet practical needs. These results are significant for facilitating seafood fishing, reducing costs, and improving detection efficiency.
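The abstract does not specify the ECLOU formulation, but enhanced IoU losses of this kind typically build on the standard CIoU loss, which augments the IoU term with a centre-distance penalty and an aspect-ratio consistency term. The following is an illustrative sketch of that baseline only, not the paper's actual loss:

```python
import math

def ciou_loss(box_a, box_b):
    """CIoU loss between two boxes given as (x1, y1, x2, y2).

    Illustrative baseline only: the paper's ECLOU variant is not
    detailed in the abstract, so this shows the standard CIoU
    formulation that such enhancements commonly extend.
    """
    # Intersection area
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter + 1e-9)

    # Squared distance between the two box centres
    cxa, cya = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cxb, cyb = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    rho2 = (cxa - cxb) ** 2 + (cya - cyb) ** 2

    # Squared diagonal of the smallest enclosing box
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9

    # Aspect-ratio consistency term
    wa, ha = box_a[2] - box_a[0], box_a[3] - box_a[1]
    wb, hb = box_b[2] - box_b[0], box_b[3] - box_b[1]
    v = (4 / math.pi ** 2) * (math.atan(wb / hb) - math.atan(wa / ha)) ** 2
    alpha = v / (1 - iou + v + 1e-9)

    return 1 - iou + rho2 / c2 + alpha * v
```

For identical boxes the loss is near zero; for disjoint boxes the distance penalty pushes it above 1, which gives gradients even when IoU is zero.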
To improve the segmentation accuracy of 3D point cloud models in feature-ambiguous regions, an unsupervised clustering algorithm based on surface fusion features combines depth residuals with normal deviation angles to form fusion features that are both descriptive and highly discriminative. The point nearest the median of the local point cloud density is adopted as the candidate seed for region growing, and point cloud cluster features are used in place of individual point features. To handle feature ambiguity in edge regions, unattributed points are re-clustered, improving cluster segmentation through extended growth. Experiments on a selected model dataset show that the method achieves reasonable segmentation of complex point cloud models and markedly reduces under-segmentation.
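The core of such a method is region growing gated by a normal deviation angle. The sketch below illustrates that mechanism under simplifying assumptions: brute-force radius neighbourhoods, density-ordered seeds (the paper's median-density seeding and depth-residual term are omitted), and a fixed angle threshold. All names and parameters are illustrative, not from the paper:

```python
import numpy as np

def region_grow(points, normals, radius=1.5, angle_deg=15.0):
    """Greedy region growing over a point cloud (N x 3 arrays).

    Illustrative sketch only: neighbours within `radius` join a cluster
    when their normal deviates from the current point's normal by less
    than `angle_deg`. Brute-force O(N^2) distances for clarity.
    """
    n = len(points)
    cos_t = np.cos(np.radians(angle_deg))
    labels = np.full(n, -1)
    # Pairwise distances and a simple local density (neighbour count)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    density = (d < radius).sum(axis=1)
    order = np.argsort(-density)  # densest seeds first (simplification)
    cluster = 0
    for seed in order:
        if labels[seed] != -1:
            continue
        labels[seed] = cluster
        queue = [seed]
        while queue:
            p = queue.pop()
            for q in np.where((d[p] < radius) & (labels == -1))[0]:
                # Normal deviation angle criterion
                if abs(np.dot(normals[p], normals[q])) >= cos_t:
                    labels[q] = cluster
                    queue.append(q)
        cluster += 1
    return labels
```

On two spatially adjacent but differently oriented patches, the angle gate splits them into separate clusters even though they are within growing range of each other.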
This paper addresses segmentation errors in video extensometers caused by deformation, dispersion, and cracking of markers during stretching. A method combining frame-to-frame matching with deep learning is proposed to solve the high-robustness segmentation problem for black-and-white markers on materials undergoing large strains. The selection of the template position during image matching and the updating of the template throughout the stretching process are discussed. Experiments and analysis were conducted on various rubber and plastic specimens that exhibit significant strain and irregular deformation. The results demonstrate that the method can be applied to line-marker video extensometers, enhancing the overall robustness of the measurement algorithm.
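The frame-to-frame matching step can be illustrated with normalised cross-correlation, a standard template-matching score; this is a minimal sketch of the idea, not the paper's implementation (a production extensometer would use an optimised routine such as OpenCV's `matchTemplate`):

```python
import numpy as np

def match_template(frame, template):
    """Locate `template` in `frame` by normalised cross-correlation.

    Illustrative sketch of the frame-to-frame matching step.
    Returns the (row, col) of the best match's top-left corner.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t) + 1e-9
    best, best_pos = -np.inf, (0, 0)
    H, W = frame.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            w = frame[r:r + th, c:c + tw]
            wz = w - w.mean()
            score = float((wz * t).sum() / (np.linalg.norm(wz) * tn + 1e-9))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

After each matched frame, the template can be refreshed from the newly matched patch so that gradual marker deformation between consecutive frames stays within the matcher's tolerance, which is the role of the template-update step discussed in the paper.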
In power cable production, performance testing of the cable insulation sheath is an important step. Compared with traditional testing methods, machine vision offers stable operation, high precision, and high efficiency. Accordingly, based on machine vision theory, the structure of the conventional tensile machine was first rebuilt, and the entire tensile test of the cable insulation sheath was imaged with a CMOS camera. A color recognition algorithm, an effective-area segmentation algorithm, a workpiece fracture judgment algorithm, and an erosion-difference algorithm were proposed to calculate the distance between the marked lines and, from it, the elongation at break of the cable material. Systematic experiments on the same batch of cable jackets show that the maximum deviation of the elongation at break measured by visual inspection is no more than 1%. The experimental results and practical applications show that the machine vision-based inspection system is more accurate, more efficient, and more stable and reliable in operation than the traditional inspection system.
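The final calculation reduces to the standard elongation-at-break formula, (L_break − L_0) / L_0 × 100%, applied to the marked-line spacing measured in the images. A minimal sketch, with illustrative names and a hypothetical pixel-to-millimetre calibration factor (not from the paper):

```python
def elongation_at_break(gauge_px_initial, gauge_px_break, mm_per_px):
    """Elongation at break (%) from marked-line spacing in pixels.

    Minimal sketch of the final calculation step. `mm_per_px` is a
    hypothetical camera calibration factor converting pixel distance
    between the marked lines to millimetres.
    """
    l0 = gauge_px_initial * mm_per_px   # initial gauge length (mm)
    lb = gauge_px_break * mm_per_px     # gauge length at break (mm)
    return (lb - l0) / l0 * 100.0
```

Note that the calibration factor cancels in the ratio, so the result depends only on the pixel measurements being taken at the same optical scale; it is kept explicit here to show where calibration would enter if absolute lengths were also reported.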