Last summer, interns Alex Kern and Andres Riofrio gave a 50-minute presentation on their cloud computing projects in front of a live audience of several hundred researchers from NASA's Jet Propulsion Laboratory, according to lead cloud architect Khawaja S. Shams. Here are brief descriptions of the two projects.
JPL is looking at how it might use the cloud to store the hundreds of terabytes, soon to be petabytes, of data it gathers from space. Kern's task was to figure out how to store such data cheaply, reliably, and securely in a public cloud. The software he developed compresses and secures the data, then divides it among three different public cloud vendors.
The most interesting feature of Kern's solution: if one-third of the data is lost because of a failure at one cloud, it can be reconstituted from the two-thirds of the data held in the other two locations. How? Kern can't explain more, because the process is being patented.
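To get a feel for how any one-third of a data set can be rebuilt from the other two-thirds, consider the textbook version of the idea: erasure coding with a simple XOR parity block. This is a generic illustration only, not Kern's patented method, and all names in it are hypothetical.

```python
# Generic XOR-parity sketch (NOT Kern's patented scheme): split the data into
# two halves and a parity block, store each block with a different cloud
# vendor, and rebuild any one lost block from the surviving two.

def encode(data: bytes):
    half = (len(data) + 1) // 2
    a = data[:half]
    b = data[half:].ljust(half, b"\0")       # pad so both halves match in length
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity                      # one block per cloud vendor

def recover(a, b, parity):
    # Exactly one argument may be None -- the block lost in a cloud failure.
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    elif b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return (a + b).rstrip(b"\0")

original = b"JPL telemetry"
a, b, parity = encode(original)
restored = recover(None, b, parity)          # pretend "cloud A" failed
print(restored == original)                  # True
```

Production systems typically use Reed-Solomon codes rather than plain XOR parity, which generalize the same principle to losing any k of n blocks.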
Riofrio's project involved the Carbon in the Arctic Reservoir Vulnerability Experiment (CARVE). "It's basically an airplane that flies around Alaska and collects carbon samples," explains Shams. "It's very computationally intense, and very bursty."
Riofrio's task was to port certain CARVE algorithms to the cloud, distributing them in parallel across many servers. The project will save the mission hundreds of thousands of dollars by storing data in the cloud rather than buying and maintaining more dedicated servers in-house, according to Shams.
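The "bursty" workload Shams describes is a natural fit for this kind of port: each carbon sample can be processed independently, so the work fans out across as many workers as the cloud provides. A minimal sketch of that pattern, with a hypothetical stand-in for the real per-sample algorithm, might look like this:

```python
# Generic fan-out sketch of a bursty, embarrassingly parallel workload --
# hypothetical names, not JPL's actual CARVE code. Each sample is processed
# independently, so the same shape scales from a local process pool to many
# cloud servers.
from concurrent.futures import ProcessPoolExecutor

def process_sample(sample):
    # Stand-in for the computationally intense per-sample algorithm.
    return sum(x * x for x in sample)

if __name__ == "__main__":
    samples = [[1, 2, 3], [4, 5], [6]]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_sample, samples))
    print(results)  # [14, 41, 36]
```

Because the workers share no state, the pool can be spun up only for the bursts of computation and torn down afterward, which is exactly where renting cloud capacity beats maintaining idle in-house servers.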
"The code that he worked on and the capabilities he added to our code base have been incorporated already into the Mars Science Laboratory . . . that will be landing on Mars in August," he says. "That's a pretty big feat for anyone, let alone a student who's here for the summer."