Advancing Cornell University's virtual reality research with our site's AI animation capabilities
Cornell University's Virtual Embodiment Lab, a virtual reality research group led by Professor Andrea Stevenson Won, is making strides in the development of robot buddies for scuba divers. The research project, a collaboration between the lab and the Laboratory for Intelligent Systems and Controls (LISC), aims to present at the IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2024).
The team, which includes Sushrut Surve, Jia Guo, Jovan Menezes, Connor Tate, Yiting Jin, Justin Walker, Silvia Ferrari, and Andrea Stevenson Won, has been using the AI Video to Animation tool from the website AI in their research. In their experience, the tool has been 7x faster to set up than alternative software and about 64% cheaper than motion capture tools.
The AI Video to Animation tool lets the team convert any video into an animation, expanding their data repository. The resulting animations can be visualized from multiple angles using the video editor feature, which has been instrumental to the project's progress.
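As a minimal sketch of what that multi-angle review looks like outside the tool, the snippet below plots one exported joint trajectory from three virtual camera angles with matplotlib. The CSV layout (frame, joint, x, y, z), the file name, and the joint name are hypothetical stand-ins, not the tool's actual export format.

```python
# Sketch: view an exported joint trajectory from several camera angles.
# "diver_motion.csv" and its column layout are assumed for illustration.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers 3D projection)

data = np.genfromtxt("diver_motion.csv", delimiter=",", names=True,
                     dtype=None, encoding="utf-8")
wrist = data[data["joint"] == "right_wrist"]  # one joint's path over time

fig = plt.figure(figsize=(9, 3))
for i, (elev, azim) in enumerate([(20, -60), (20, 30), (80, -90)], start=1):
    ax = fig.add_subplot(1, 3, i, projection="3d")
    ax.plot(wrist["x"], wrist["y"], wrist["z"])
    ax.view_init(elev=elev, azim=azim)  # a different virtual camera per panel
    ax.set_title(f"elev={elev}, azim={azim}")
plt.tight_layout()
plt.show()
```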
The website AI also supports exporting motion files for further analysis in other software. The FBX export feature in particular has been indispensable, letting the team bring their animations into Unity for further manipulation.
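The team's Unity-side workflow is not published; as a rough illustration of handling an exported FBX before it reaches Unity, here is a sketch that sanity-checks a clip with Blender's bundled Python API (bpy). The file name is illustrative.

```python
# Run inside Blender: blender --background --python inspect_fbx.py
# Uses Blender's bundled Python API (bpy); "diver_clip.fbx" is illustrative.
import bpy

bpy.ops.import_scene.fbx(filepath="diver_clip.fbx")

# Print each armature's bone list and animated frame range: a quick check
# that the rig and keyframes survived the export before moving on to Unity.
for obj in bpy.context.scene.objects:
    if obj.type == "ARMATURE":
        print(obj.name, "bones:", [bone.name for bone in obj.data.bones])
        if obj.animation_data and obj.animation_data.action:
            start, end = obj.animation_data.action.frame_range
            print("frames:", int(start), "to", int(end))
```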
A script created by Jiahao Liu connects the motion capture rigs to the FBX files from the website AI. This link lets the team accurately measure motion metrics such as position, rotation, and orientation, and to compute derived metrics such as speed, depth, force of movement in water, and distance traveled using the website AI.
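Jiahao Liu's script itself is not public; the NumPy sketch below illustrates the kind of computation described, deriving speed, distance traveled, depth, and angular speed from per-frame positions and orientation quaternions. The array shapes, units (meters at 30 fps), axis convention, and the random stand-in data are all assumptions.

```python
import numpy as np

fps = 30.0
# Stand-in data: per-frame joint positions (meters) and orientations (w,x,y,z).
rng = np.random.default_rng(0)
positions = np.cumsum(rng.normal(scale=0.01, size=(300, 3)), axis=0)
quats = np.tile([1.0, 0.0, 0.0, 0.0], (300, 1))  # identity rotations as a stub

# Linear metrics: frame-to-frame displacement gives speed and path length.
step = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # meters per frame
speed = step * fps                                         # meters per second
distance = step.sum()                                      # total path length
depth = -positions[:, 2]                                   # assuming z points up

# Angular metric: angle between consecutive unit quaternions.
dots = np.abs(np.sum(quats[1:] * quats[:-1], axis=1)).clip(0.0, 1.0)
angular_speed = 2.0 * np.arccos(dots) * fps                # radians per second

print(f"mean speed {speed.mean():.3f} m/s over {distance:.2f} m, "
      f"max depth {depth.max():.2f} m")
```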
The project centers on building a hydrodynamic motion model to support the development of robot buddies for scuba divers. By leveraging the AI's capabilities, the team hopes to generate simulated data for many different divers, making the training data for a robot "diving buddy" more diverse, accurate, and robust.
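The lab's hydrodynamic model has not been released; for flavor, here is a minimal quadratic-drag sketch of the force a moving limb experiences in water, F = ½ρC_d·A·|v|·v, one plausible ingredient of such a model. The coefficients are textbook-style placeholders, not the lab's values.

```python
import numpy as np

RHO_SEAWATER = 1025.0  # kg/m^3
CD_LIMB = 1.0          # drag coefficient, rough cylinder-like assumption
AREA_LIMB = 0.03       # m^2, assumed frontal area of a forearm and hand

def drag_force(velocity: np.ndarray) -> np.ndarray:
    """Quadratic drag opposing a limb's velocity vector, in newtons."""
    speed = np.linalg.norm(velocity, axis=-1, keepdims=True)
    return -0.5 * RHO_SEAWATER * CD_LIMB * AREA_LIMB * speed * velocity

# Example: a fin kick sweeping mostly downward at about 1.5 m/s.
print(drag_force(np.array([0.2, -1.5, 0.0])))
```

Feeding per-frame limb velocities (like those computed in the previous sketch) through a model of this shape would yield the "force of movement in water" metric mentioned above.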
The lab has not publicly detailed exactly how it has enhanced its underwater motion data collection for robotic diver assistants using the website AI, but it is plausible that the system combines AI-powered data annotation and processing tools, virtual reality simulations, distributed estimation and control algorithms, and AI-based data handling.
This collaboration between Cornell University's Virtual Embodiment Lab and the website AI marks a significant step forward in assistive technologies for scuba divers, promising a safer and more efficient underwater experience for future explorers.
[1] D. A. Paley et al., "Distributed Estimation and Control for Underwater Robots," IEEE Transactions on Robotics, vol. 36, no. 4, pp. 875-892, 2020.
[2] S. R. Lin et al., "Domain-Specific AI for Underwater Robotics," IEEE Robotics and Automation Magazine, vol. 28, no. 1, pp. 34-43, 2021.
- Sushrut Surve, Jia Guo, Jovan Menezes, Connor Tate, Yiting Jin, Justin Walker, Silvia Ferrari, Andrea Stevenson Won, and other team members at Cornell University's Virtual Embodiment Lab are using the AI Video to Animation tool from the website AI in their research on robot buddies for scuba divers.
- The AI Video to Animation tool has been key in helping the team convert videos into animations, expanding their data repository.
- The animations produced can be visualized from multiple angles using the video editor feature, significantly aiding the project's progress.
- The website AI also supports exporting motion files for analysis in other software, with the FBX export feature being particularly valuable for bringing animations into Unity for additional manipulation.
- Jiahao Liu created a script to link the rigs from motion capture and the FBX files from the website AI, enabling accurate measurement of motion metrics like position, rotation, and orientation.
- The project's goal is to create a hydrodynamic motion model to assist in the development of robot buddies for scuba divers, with the team hoping to use the AI's capabilities to generate diverse, accurate, and robust training data for these robots in the future.
- It's speculated that the lab may be utilizing AI-powered data annotation and processing tools, virtual reality simulations, distributed estimation and control algorithms, and AI-based data handling to improve the underwater motion data collection for robotic assistants for scuba divers.
- The collaboration between Cornell University's Virtual Embodiment Lab and the website AI represents a significant advancement in assistive technologies for scuba divers, potentially leading to safer and more efficient underwater experiences for future explorers.
- In the scientific community, papers such as "Distributed Estimation and Control for Underwater Robots" and "Domain-Specific AI for Underwater Robotics" demonstrate the potential of artificial intelligence in the development of underwater robotics across a range of applications.