Terry Fong talks about the K10 rover test

Terry Fong is the Human Exploration Telerobotics Project Manager and Director of the Intelligent Robotics Group at NASA Ames Research Center in Mountain View, CA. During the NASA Social on Friday, he took a few moments to talk about the mission and its objectives.

One of the things the group is trying to accomplish is to characterize the system: how to build software that runs on board the station and on the robot. The second is to simulate a future lunar mission.

The simulation is based on a mission concept developed by the University of Colorado at Boulder: deploying a radio telescope on the far side of the moon to make observations of what is called the “Cosmic Dawn”, the very early history of the universe. To do that, they need to simulate how to use a robot to lay out a radio telescope. A series of tests looks at the different parts of such a mission: a ground survey, deployment of the telescope, and inspection of the deployed telescope.

MOC-2-Sirius is one of the Mission Operations Centers at NASA Ames. It is used to support different kinds of missions. There are several live feeds from the rover test area, known as the Roverscape, which is about the size of two football fields. The four-wheeled rover is named K10. There is also a live feed from the International Space Station, where astronaut Luca Parmitano controls the rover.

Photo

The user interface Parmitano uses runs on an off-the-shelf Lenovo ThinkPad. The software was developed at Ames. The 3D user interface lets the crew monitor the robot and displays information such as the rover's position, battery level, and health. It also shows video feeds from the robot.
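As a rough sketch of the kind of state such a display surfaces, consider the following; the field names and layout are hypothetical, not the actual Ames software:

```python
from dataclasses import dataclass

@dataclass
class RoverTelemetry:
    """Hypothetical snapshot of the state a crew display might render."""
    x_m: float            # position east of the site origin, meters
    y_m: float            # position north of the site origin, meters
    heading_deg: float    # yaw relative to north
    battery_pct: float    # remaining charge
    health: str           # e.g. "nominal", "degraded"

def render_status(t: RoverTelemetry) -> str:
    """Format a one-line status like the panel beside the 3D view."""
    return (f"pos=({t.x_m:.1f}, {t.y_m:.1f}) m  "
            f"hdg={t.heading_deg:.0f} deg  "
            f"batt={t.battery_pct:.0f}%  health={t.health}")

print(render_status(RoverTelemetry(12.4, -3.1, 270.0, 82.0, "nominal")))
```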

Photo

The people in the front row of the Mission Operations Center are on console. They communicate with mission control in Houston, Texas, as well as Marshall Space Flight Center in Huntsville, Alabama, and coordinate activities with Parmitano.

Photo

Other people in the room focus on engineering and science. They study how difficult it is for the crew to use the system and how much situational awareness the crew has of it. They also characterize the amount of data sent up to and down from the ISS. There are many things they are measuring to see how well the system works and how to improve it.
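A minimal sketch of that kind of accounting, with invented names, might simply tally the bytes moving in each direction across the link:

```python
class LinkStats:
    """Toy tally of data volume across the space-to-ground link."""

    def __init__(self):
        self.uplink_bytes = 0     # data sent up to the ISS
        self.downlink_bytes = 0   # telemetry and video sent down

    def record(self, direction: str, nbytes: int):
        if direction == "up":
            self.uplink_bytes += nbytes
        else:
            self.downlink_bytes += nbytes

stats = LinkStats()
stats.record("up", 2_048)        # e.g. a command sequence
stats.record("down", 1_500_000)  # e.g. a burst of video frames
print(f"up: {stats.uplink_bytes} B, down: {stats.downlink_bytes} B")
```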

The display the crew uses is a first-person view. Using a 3D display and a video feed, the crew gets a good view of where the robot is. If a video feed becomes unavailable or communications go silent, the robot is autonomous and can continue performing a task: it can follow a set of commands without human intervention. When communication is reestablished, the rover sends all the information back to Earth and the ISS.
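A minimal sketch of this pattern, sequenced commands plus store-and-forward telemetry, might look like the following; everything here is invented for illustration, not the K10 flight software:

```python
from collections import deque

class Rover:
    """Toy model of autonomy with store-and-forward telemetry."""

    def __init__(self, plan):
        self.commands = deque(plan)   # uplinked task plan
        self.backlog = []             # telemetry buffered during loss of signal

    def step(self, link_up: bool):
        """Execute the next command; buffer the report if the link is down."""
        if not self.commands:
            return
        report = f"executed {self.commands.popleft()}"  # stand-in for real actuation
        if link_up:
            self.flush()
            print(f"downlink: {report}")
        else:
            self.backlog.append(report)   # keep working; report later

    def flush(self):
        """Replay everything buffered while the signal was lost."""
        for entry in self.backlog:
            print(f"downlink (deferred): {entry}")
        self.backlog.clear()

# The rover keeps executing its plan through a signal outage,
# then replays the buffered telemetry once coverage returns.
rover = Rover(["drive 10 m", "pan camera", "take panorama"])
for link in (True, False, True):
    rover.step(link_up=link)
```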

The test relies on the TDRS network, a system of geosynchronous satellites, so coverage is good but not continuous. There are periods, including one during our visit, when the signal is lost. Because this is normal, procedures are in place to accommodate these moments.

Photo

To achieve continuous coverage, NASA is looking at scenarios with a point-to-point radio link: a spacecraft overhead in orbit communicating with the rover on the ground. Ideally, the spacecraft could remain stationary overhead and maintain continuous coverage. The robot's ability to operate by itself is important so it can remain productive even when no one is talking to it.

Fong finds it interesting that remotely operated robots are being used more and more. Telepresence robots allow a person to live in one part of the country and control a robot in another. NASA is doing something similar, except from space. This technology allows astronauts in orbit to operate robots on the ground, extending humanity's reach.

NASA is interested in other modalities for controlling the rover. Voice, gesture, and standard 2D and 3D interfaces are some of the modes under research. There is no one right way to operate a robot; a combination of modes can be really important.
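One way to picture a multimodal setup is to normalize each input mode into a single command vocabulary, so the rover's executive doesn't care how the operator issued the order. This is a hedged sketch with invented parsers, not any NASA interface:

```python
from typing import Callable, Dict

def parse_voice(utterance: str) -> str:
    # Naive: treat the spoken phrase itself as the command.
    return utterance.lower().strip()

def parse_gesture(gesture: str) -> str:
    # Map a recognized gesture label to a command.
    return {"point_forward": "drive forward", "palm_up": "stop"}.get(gesture, "stop")

def parse_map_click(waypoint: tuple) -> str:
    # A click on the 2D/3D map becomes a drive-to goal.
    x, y = waypoint
    return f"drive to ({x}, {y})"

PARSERS: Dict[str, Callable[..., str]] = {
    "voice": parse_voice,
    "gesture": parse_gesture,
    "map_click": parse_map_click,
}

def dispatch(modality: str, raw) -> str:
    """Route raw operator input through the parser for its modality."""
    return PARSERS[modality](raw)

print(dispatch("voice", "Drive Forward"))   # -> "drive forward"
print(dispatch("gesture", "palm_up"))       # -> "stop"
print(dispatch("map_click", (4, 7)))        # -> "drive to (4, 7)"
```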

The robotics group considers K10 a live avatar. Using the camera views and other information, the operator gets a good sense of being on the surface with the robot. You don't need a fully immersive, full-body experience to feel present.

The rover is called K10 because it is the successor to an earlier rover named K9. K9 was named after the robotic dog on the show Doctor Who; it got a dog's name because its chassis was created at JPL for another rover named FIDO. When the next version was built at Ames, they weren't feeling creative and decided to call it K10.

K9 was developed back in the late 1990s and early 2000s. The group at NASA Ames has been working on telerobotics for over 20 years.

Photo