We would like to thank our philanthropic sponsor and contributor, Dr. Yubin Xi, whose financial support gave us the opportunity to advance our practical skills through this real-life engineering design project.
The objective of this project is to provide a user interface for participants in a 3D scan environment, in order to improve the quality of the scan and reduce the number of instructions given by the scan technician. The system will illuminate the ideal location for the head or hand being scanned through some visualization method (augmented reality, virtual reality, or a hologram). Once the user being scanned reaches this position, they will be notified of completion.
From the toothbrush you hold in the morning to the seat in your car, ergonomic products make your life more comfortable. Since consumers dictate the market, demand for consumer-friendly, ergonomic products has grown, and 3D scanning has grown in popularity along with it. The team's sponsor works as a professional ergonomic engineer. He uses scanners to take 3D pictures of various body parts, which allows him to design ergonomically oriented products such as cell phones, computer mice, and game controllers. Current scanners have a limited field of view, and increasing this field of view results in poor-quality scans. Therefore, scan technicians must verbally guide participants into the needed position and orientation, which can take participants 30 minutes or longer. Team 523's design strives to shorten these scan times, which will increase productivity and save money.
Team 523 has designed a mixed reality wearable that tracks and displays the position and orientation of a user's hand. An AprilTag, similar to a QR code, clips onto a bracelet; the bracelet uses a quick-remove fastener to reduce motion blur. A 3D camera tracks and compiles position and orientation data from the wearable's AprilTag. The computer finds the AprilTag and displays it as a 3D model on a nearby screen. The 3D model, most likely of a hand or head, allows the participant to match their position to the target position shown on the monitor. This lets participants self-correct their position and orientation without any extra verbal direction. An ideal setup would not only allow the team's 3D camera to track full bodies but also track any body type without any verbal direction.
Scan or click the QR code to see each team member's LinkedIn account.
Thanks to our team adviser and Senior Design Professor, Dr. Shayne McConomy. We would not
have been able to accomplish our project goals without his technical and professional guidance.
Thanks to the FAMU-FSU College of Engineering Department of Mechanical Engineering for the
opportunity to pursue academic excellence and for allowing us to use school resources.
This is a timeline of what is coming up for our team and our project.