4 CONCLUSION
1) To address the difficulty of detecting the corner points of the crane's grab
boom during alignment between the boom and the lifting hole of the segmental
beam body, this article proposes a method for constructing grayscale difference
maps. The resulting map exhibits a bimodal distribution between the region to
be segmented and the background structure, which makes it well suited to
binarization with the Otsu algorithm, and the edge image of the crane grab boom
can be accurately extracted under sufficient lighting and without occlusion.
The image preprocessing method in this article avoids the over-segmentation and
under-segmentation to which other image segmentation methods are prone, and
meets the segmentation requirements under different lighting conditions. The
proposed method is computationally fast and, unlike semantic segmentation, does
not require a large amount of time to produce datasets for training neural
networks.
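The binarization step can be illustrated with a minimal Otsu threshold computation on a synthetic bimodal "difference map"; the image data and the helper `otsu_threshold` are placeholders for illustration, not the paper's implementation:

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal difference map: dark background, bright boom region.
rng = np.random.default_rng(0)
diff_map = np.clip(rng.normal(40, 10, (64, 64)), 0, 255).astype(np.uint8)
diff_map[20:44, 20:44] = np.clip(rng.normal(200, 10, (24, 24)), 0, 255).astype(np.uint8)

t = otsu_threshold(diff_map)
binary = (diff_map >= t).astype(np.uint8)
```

Because the histogram is bimodal, the chosen threshold falls between the two modes, cleanly separating the bright region from the background.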
2) The threshold of the Hough transform must be selected manually, and changing
weather, lighting, construction environments, and other external conditions
make it difficult to guarantee reliability in engineering applications. By
combining the Hough transform with k-means clustering, the Hough line detection
threshold can be set low and lines with the same features merged into one,
providing a new approach for corner detection.
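The merging step can be sketched as follows: lines returned by a low-threshold Hough transform, expressed as (rho, theta) pairs, are grouped with a small k-means and each cluster collapsed to one representative line. The line parameters below are synthetic placeholders, not detections from a real image:

```python
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Plain k-means on row-vector points; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Synthetic low-threshold Hough output: many near-duplicate (rho, theta)
# detections clustered around two true edges of the boom.
lines = np.array([[100.0, 0.50], [101.0, 0.51], [99.5, 0.49],
                  [250.0, 1.20], [251.5, 1.21], [249.0, 1.19]])
centroids, labels = kmeans(lines, k=2)
```

Each centroid stands in for a whole bundle of near-duplicate detections, so the low Hough threshold no longer floods the corner-detection stage with redundant lines.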
3) The method proposed in this article for determining the optimal adaptive
threshold by counting votes removes the erroneous lines produced by the
low-threshold Hough transform and ensures that no lines are missed. Replacing
the original cluster centroid with an improved centroid calculation as the
basis for line fitting improves fitting accuracy under the same lighting
conditions and robustness under different uniform lighting conditions. The
algorithm performs best under strong supplementary lighting: the average
detection error is within 0-2 pixels in 97.1% of cases, with a recognition
accuracy of 98.6%, and the recognition success rate under different lighting
conditions exceeds 92.9%. This method is significantly superior to traditional
line detection methods and meets the needs of automatic gripping of the boom.
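One plausible reading of the improved centroid idea is to weight each line in a cluster by its Hough vote count rather than averaging uniformly, so that weakly supported spurious detections pull the fitted line less. The weighting scheme and data below are illustrative assumptions, not the paper's exact formula:

```python
import numpy as np

# Hypothetical cluster of near-duplicate Hough lines: rows of (rho, theta, votes).
# The vote-weighted centroid favours strongly supported lines; plain averaging
# treats the weakly supported outlier equally.
cluster = np.array([[100.0, 0.50, 180.0],
                    [103.0, 0.53, 40.0],   # weakly supported outlier
                    [100.5, 0.50, 160.0]])

params, votes = cluster[:, :2], cluster[:, 2]
plain_centroid = params.mean(axis=0)
weighted_centroid = (params * votes[:, None]).sum(axis=0) / votes.sum()
```

Here the weighted centroid stays close to the two well-supported lines (rho near 100.5), while the plain mean is dragged toward the outlier, which matches the reported gain in line-fitting accuracy.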