Astrobotic has finally offered a good look at the vehicle that will carry scientific payloads to the lunar surface. The company has revealed the finished version of the Peregrine Moon lander ahead of its launch in the fourth quarter of the year. It’s an externally simple design that resembles an upside-down pot, but that will be enough to carry 24 missions that include 11 NASA items, a Carnegie Mellon rover, private cargo and even “cultural messages” from Earth.
Peregrine is slightly over six feet tall and can hold up to 100 kilograms (about 220 pounds on Earth). More importantly for customers, it's relatively cheap: it'll cost $1.2 million per kilogram to ferry payloads to the Moon's surface ($300,000 per kilogram to orbit). That sounds expensive, but it's a bargain compared to the cost of rocket launches. SpaceX currently charges $67 million for each Falcon 9 launch, and that 'only' reaches Earth orbit.
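The pricing above is simple enough to sanity-check with back-of-the-envelope math. The sketch below is purely illustrative (the function and constant names are our own, not Astrobotic's), using the per-kilogram rates quoted in the article:

```python
# Rates quoted in the article; names are our own illustration.
SURFACE_RATE_USD_PER_KG = 1_200_000  # delivery to the lunar surface
ORBIT_RATE_USD_PER_KG = 300_000      # delivery to orbit

def delivery_cost(mass_kg: int, to_surface: bool = True) -> int:
    """Estimated delivery cost, in US dollars, for a payload of mass_kg."""
    rate = SURFACE_RATE_USD_PER_KG if to_surface else ORBIT_RATE_USD_PER_KG
    return mass_kg * rate

# A fully loaded 100 kg manifest to the surface:
print(delivery_cost(100))                    # 120000000 (i.e. $120M)
# A 10 kg payload dropped off in orbit:
print(delivery_cost(10, to_surface=False))   # 3000000 (i.e. $3M)
```

So even a fully booked Peregrine flight lands in the same price bracket as a single dedicated Falcon 9 launch that never leaves Earth orbit.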
The Astrobotic team still has to finish integrating payloads, conduct environmental testing and ship Peregrine to Cape Canaveral, where it will launch aboard a ULA Vulcan Centaur rocket. The payloads are already integrated into the flight decks, however.
The machine should make history if and when it’s successful. Peregrine is expected to be the first US spacecraft to (properly) land on the Moon since the Apollo program ended. Past missions like Lunar Prospector, LCROSS, GRAIL and LADEE all ended with deliberate crashes. Astrobotic’s effort won’t be quite as momentous as the crewed Artemis landing, but it will help mark humanity’s renewed interest in a lunar presence.
Amazon announced today that it will open up its Amazon Prime delivery network to third-party retailers. The new service will be called “Buy With Prime”. […]
For humans, identifying items in a scene — whether that’s an avocado or an Aventador, a pile of mashed potatoes or an alien mothership — is as simple as looking at them. But for artificial intelligence and computer vision systems, developing a high-fidelity understanding of their surroundings takes a bit more effort. Well, a lot more effort: around 800 hours of hand-labeling training images, if we’re being specific. To help machines see more the way people do, a team of researchers at MIT CSAIL, in collaboration with Cornell University and Microsoft, has developed STEGO, an algorithm able to identify objects in images down to the individual pixel.
Normally, creating CV training data involves a human drawing boxes around specific objects within an image — say, a box around the dog sitting in a field of grass — and labeling those boxes with what’s inside (“dog”), so that the AI trained on it will be able to tell the dog from the grass. STEGO (Self-supervised Transformer with Energy-based Graph Optimization), conversely, uses a technique known as semantic segmentation, which applies a class label to each pixel in the image to give the AI a more accurate view of the world around it.
Whereas a labeled box would have the object plus other items in the surrounding pixels within the boxed-in boundary, semantic segmentation labels every pixel in the object, but only the pixels that comprise the object — you get just dog pixels, not dog pixels plus some grass too. It’s the machine learning equivalent of using the Smart Lasso in Photoshop versus the Rectangular Marquee tool.
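The difference between the two annotation styles is easy to see on a toy grid. The sketch below is a minimal illustration (the 6×6 "image", class IDs, and mask coordinates are all made up for this example), assuming class 0 is grass and class 1 is dog:

```python
import numpy as np

H, W = 6, 6  # a toy 6x6 "image"

# Bounding-box annotation: every pixel inside the box gets the "dog"
# label, including the grass pixels that happen to fall inside the box.
box_mask = np.zeros((H, W), dtype=int)
box_mask[1:5, 1:5] = 1  # a 4x4 box drawn around the dog

# Semantic segmentation: only the pixels that actually belong to the dog.
seg_mask = np.zeros((H, W), dtype=int)
seg_mask[2:4, 2:4] = 1  # the dog itself is smaller than its box

print(int(box_mask.sum()))  # 16 pixels labeled "dog" by the box
print(int(seg_mask.sum()))  # 4 pixels that are actually dog
```

In the box-labeled version, 12 of the 16 "dog" pixels are really grass; the segmentation mask keeps only the 4 true dog pixels.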
The problem with this technique is one of scope. Conventional multi-shot supervised systems often demand thousands, if not hundreds of thousands, of labeled images with which to train the algorithm. Multiply that by the 65,536 individual pixels that make up even a single 256×256 image, all of which now need to be individually labeled as well, and the workload required quickly spirals into impossibility.
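The arithmetic behind that spiral is straightforward. The dataset size below is a hypothetical figure chosen for illustration; the per-image pixel count comes from the 256×256 example above:

```python
# Per-pixel labeling workload for the article's 256x256 example.
image_pixels = 256 * 256
print(image_pixels)  # 65536 individual pixels per image

# Scale that to a modest supervised dataset (size chosen for illustration):
dataset_images = 10_000
print(image_pixels * dataset_images)  # 655360000 pixel labels to assign
```

Hundreds of millions of individual labels for a dataset that, by supervised-learning standards, isn't even especially large.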
Instead, “STEGO looks for similar objects that appear throughout a dataset,” the CSAIL team wrote in a press release Thursday. “It then associates these similar objects together to construct a consistent view of the world across all of the images it learns from.”
“If you’re looking at oncological scans, the surface of planets, or high-resolution biological images, it’s hard to know what objects to look for without expert knowledge. In emerging domains, sometimes even human experts don’t know what the right objects should be,” MIT CSAIL PhD student, Microsoft Software Engineer, and the paper’s lead author Mark Hamilton said. “In these types of situations where you want to design a method to operate at the boundaries of science, you can’t rely on humans to figure it out before machines do.”
Trained on a wide variety of image domains — from home interiors to high altitude aerial shots — STEGO doubled the performance of previous semantic segmentation schemes, closely aligning with the image appraisals of the human control. What’s more, “when applied to driverless car datasets, STEGO successfully segmented out roads, people, and street signs with much higher resolution and granularity than previous systems. On images from space, the system broke down every single square foot of the surface of the Earth into roads, vegetation, and buildings,” the MIT CSAIL team wrote.
“In making a general tool for understanding potentially complicated data sets, we hope that this type of an algorithm can automate the scientific process of object discovery from images,” Hamilton said. “There’s a lot of different domains where human labeling would be prohibitively expensive, or humans simply don’t even know the specific structure, like in certain biological and astrophysical domains. We hope that future work enables application to a very broad scope of data sets. Since you don’t need any human labels, we can now start to apply ML tools more broadly.”
Despite its superior performance to the systems that came before it, STEGO does have limitations. For example, it can identify both pasta and grits as “food-stuffs” but doesn’t differentiate between them very well. It also gets confused by nonsensical images, such as a banana sitting on a phone receiver. Is this a food-stuff? Is this a pigeon? STEGO can’t tell. The team hopes to build a bit more flexibility into future iterations, allowing the system to identify objects under multiple classes.
NASA has picked SpaceX, Amazon and four other American companies to develop the next generation of near-Earth space communication services meant to support its future missions. The agency started looking for partners under the Communication Services Project (CSP) in mid-2021, explaining that the use of commercially provided SATCOM will reduce costs and allow it to focus its efforts on deep space exploration and science missions.
“Adopting commercial SATCOM capabilities will empower missions to leverage private sector investment that far exceeds what government can do,” NASA wrote on the official project page. By using technology developed by commercial companies, the agency will have continued access to any innovation they incorporate into the system. At the moment, NASA relies on its Tracking and Data Relay Satellite (TDRS) system for near-Earth space communications. Many of its satellites were launched in the ’80s and ’90s, though, and the system is set to be decommissioned in the coming years.
The funded agreements under NASA’s Communication Services Project have a combined value of $278.5 million, with SpaceX getting the highest cut. NASA expects the companies to match and exceed its contribution during the five-year development period. SpaceX, which proposed a “commercial optical low-Earth orbiting relay network for high-rate SATCOM services,” has been awarded $69.95 million. Amazon’s Project Kuiper is getting the second-highest cut and has been awarded $67 million, while Viasat Incorporated has been awarded $53.3 million. The other three awardees are Telesat US Services ($30.65 million), SES Government Solutions ($28.96 million) and Inmarsat Government Inc. ($28.6 million).
All the participants are expected to be able to conduct in-space demonstrations by 2025 and show that their technology is capable of “new high-rate and high-capacity two-way communications.” NASA will sign multiple long-term contracts with the companies that succeed in developing effective communication technologies for near-Earth operations by 2030.
The long-established British tennis tournament Wimbledon announced yesterday that players from Russia and Belarus are not welcome to participate in this year’s edition of the […]