
Engineering Group | Case Report | Article ID: igmin167

System for Detecting Moving Objects Using 3D LiDAR Technology

Robotics | Image Processing

Affiliation

    Department of Electronics and Communication Engineering, Hajee Mohammad Danesh Science and Technology University, Dinajpur-5200, Bangladesh

    Department of Pharmacy, World University of Bangladesh, Bangladesh

Abstract

The “System for Detecting Moving Objects Using 3D LiDAR Technology” introduces a groundbreaking method for precisely identifying and tracking dynamic entities within a given environment. By harnessing the capabilities of advanced 3D LiDAR (Light Detection and Ranging) technology, the system constructs intricate three-dimensional maps of the surroundings using laser beams. Through continuous analysis of the evolving data, the system adeptly discerns and monitors the movement of objects within its designated area. What sets this system apart is its innovative integration of a multi-sensor approach, combining LiDAR data with inputs from various other sensor modalities. This fusion of data not only enhances accuracy but also significantly boosts the system’s adaptability, ensuring robust performance even in challenging environmental conditions characterized by low visibility or erratic movement patterns. This pioneering approach fundamentally improves the precision and reliability of moving object detection, thereby offering a valuable solution for a diverse array of applications, including autonomous vehicles, surveillance, and robotics.
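The abstract describes detecting moving objects by continuously analyzing successive 3D LiDAR scans. The paper's own pipeline is not reproduced here; the following is a minimal Python sketch of the frame-differencing idea, under the assumption that consecutive point-cloud frames have already been aligned for ego-motion. The function name detect_moving_points, the 0.2 m threshold, and the random stand-in data are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): flag likely "moving" points
# by comparing two consecutive LiDAR frames. A point in the current frame with
# no nearby neighbour in the previous frame is treated as part of a moving
# object. Ego-motion compensation is assumed to have been applied already.
import numpy as np
from scipy.spatial import cKDTree

def detect_moving_points(prev_frame: np.ndarray,
                         curr_frame: np.ndarray,
                         distance_threshold: float = 0.2) -> np.ndarray:
    """Return a boolean mask over curr_frame marking likely moving points.

    prev_frame, curr_frame: (N, 3) arrays of x, y, z coordinates in a common frame.
    distance_threshold: metres; hypothetical value, tune per sensor and scene.
    """
    tree = cKDTree(prev_frame)              # spatial index over the previous scan
    dists, _ = tree.query(curr_frame, k=1)  # nearest previous point for each current point
    return dists > distance_threshold       # far from every old point => likely moved

# Usage with random stand-in data:
prev = np.random.rand(1000, 3) * 50.0
curr = np.vstack([prev + np.random.normal(0.0, 0.02, prev.shape),  # static background
                  np.random.rand(50, 3) * 50.0])                   # newly appeared points
mask = detect_moving_points(prev, curr)
print(f"{mask.sum()} of {len(mask)} points flagged as moving")
```

In a full system the flagged points would then be clustered into objects and tracked across frames, and fused with other sensor modalities as the abstract notes; this sketch covers only the per-frame change-detection step.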


