Open Access

MATEC Web Conf.
Volume 417, 2025
2025 RAPDASA-RobMech-PRASA-AMI Conference: Bridging the Gap between Industry & Academia - The 26th Annual International RAPDASA Conference, joined by RobMech, PRASA and AMI, co-hosted by CSIR and Tshwane University of Technology, Pretoria

| Article Number | 04002 |
|---|---|
| Number of page(s) | 17 |
| Section | Robotics and Mechatronics |
| DOI | https://doi.org/10.1051/matecconf/202541704002 |
| Published online | 25 November 2025 |

