Open Access
MATEC Web Conf.
Volume 417, 2025
2025 RAPDASA-RobMech-PRASA-AMI Conference: Bridging the Gap between Industry & Academia - The 26th Annual International RAPDASA Conference, joined by RobMech, PRASA and AMI, co-hosted by CSIR and Tshwane University of Technology, Pretoria
Article Number 04003
Number of pages: 14
Section Robotics and Mechatronics
DOI https://doi.org/10.1051/matecconf/202541704003
Published online 25 November 2025
  1. L. Yang, B. Kang, Z. Huang, Z. Zhao, X. Xu, J. Feng, H. Zhao, Depth Anything V2, arXiv preprint arXiv:2406.09414v2 (2024).
  2. J. Spencer, F. Tosi, M. Poggi, R. S. Arora, C. Russell, S. Hadfield, R. Bowden, G. Zhou, Z. Li, Q. Rao, Y. Bao, The third monocular depth estimation challenge, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 1-14 (2024).
  3. P. Ramirez, F. Tosi, L. Di Stefano, R. Timofte, A. Costanzino, M. Poggi, J. Hwang, NTIRE 2024 challenge on HR depth from images of specular and transparent surfaces, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 6499-6512 (2024).
  4. J.M. Louw, J. Verster, J. Dickens, Assessing Depth Anything V2 monocular depth estimation as a LiDAR alternative in robotics, in Proc. 2025 RAPDASA-RobMech-PRASA-AMI Conf. (under review).
  5. U. Rajapaksha, F. Sohel, H. Laga, D. Diepeveen, M. Bennamoun, Deep learning-based depth estimation methods from monocular image and videos: A comprehensive survey, ACM Comput. Surv. 56, 1-51 (2024).
  6. J. Spencer, C. Russell, S. Hadfield, R. Bowden, Deconstructing self-supervised monocular reconstruction: The design decisions that matter, arXiv preprint arXiv:2208.01489 (2022).
  7. K. Purdon, J. Dickens, W. de Ronde, K. Ramruthan, G. Crafford, Voyager, a ground mobile robotic platform for research development, in Proc. 2023 RAPDASA-RobMech-PRASA-AMI Conf. (2023).
  8. Ouster, OS0: Ultra-Wide View High-Resolution Imaging LiDAR Datasheet, REV: 12/2024 (2024). Available: https://data.ouster.io/downloads/datasheets/datasheet-rev7-v3p1-os0.pdf [Accessed: 29 January 2025].
  9. S. Macenski, T. Foote, B. Gerkey, C. Lalancette, W. Woodall, Robot Operating System 2: Design, architecture, and uses in the wild, Sci. Robot. 7 (2022).
  10. S. Macenski, F. Martin, R. White, J. G. Clavero, The Marathon 2: A navigation system, in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Las Vegas (2020).
  11. S. Macenski, D. Tsai, M. Feinberg, Spatio-temporal voxel layer: A view on robot perception for the dynamic world, Int. J. Adv. Robot. Syst. 17(2) (2020), doi:10.1177/1729881420910530.
  12. Logitech, C920 HD Pro Webcam, https://www.logitech.com/en-za/products/webcams/c920-pro-hd-webcam.960-001055.html [Accessed: 7 April 2025] (n.d.).
  13. ELP, ELP 2Megapixel 1920×1080 IR Webcam, https://www.svpro.cc/product/svpro-2megapixel-19201080-ir-webcam-1-2-7-cmos-ov2710-mjpeg-30fps-60fps-120fps-ir-cut-filter-day-and-night-usb-camera/ [Accessed: 7 April 2025] (n.d.).
  14. G. Bradski, The OpenCV Library, Dr. Dobb's J. Softw. Tools (2000).
  15. L. Yang, B. Kang, Z. Huang, X. Xu, J. Feng, H. Zhao, Depth anything: Unleashing the power of large-scale unlabeled data, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Seattle, WA, USA, pp. 10371-10381 (2024).
  16. M.A. Fischler, R.C. Bolles, Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24(6), 381-395 (1981).
  17. R.B. Rusu, S. Cousins, 3D is here: Point Cloud Library (PCL), in Proc. IEEE Int. Conf. Robot. Autom. (ICRA) (2011).
