I'm not sure anyone in this thread actually knows anything about data centers based on some of the comments. A server rack in an average data center pulls 10 kilowatts. In a custom-designed data center like Amazon's or Google's, racks pull upwards of 30-40 kilowatts each. Once you're at 4-5 racks you need solar panels the size of the ISS's arrays, and a good-sized data center has dozens to hundreds of racks.
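To put rough numbers on that power claim, here's a back-of-envelope sketch in Python. The ISS array figures (~120 kW from roughly 2,500 m² of panels) and the 30% overhead factor are assumed illustrative values, not anything from the post:

```python
# Back-of-envelope: solar array area needed to feed server racks in orbit.
# All figures are rough assumptions for illustration, not vendor specs.

RACK_POWER_KW = 35.0    # high-density rack (30-40 kW range from the post)
ISS_ARRAY_KW = 120.0    # ISS arrays produce roughly 120 kW at beginning of life
ISS_ARRAY_M2 = 2500.0   # ISS array area, roughly 2,500 m^2

def array_area_m2(n_racks: int, overhead: float = 1.3) -> float:
    """Panel area for n racks, with ~30% assumed overhead for cooling pumps,
    power conversion losses, and battery charging for eclipse."""
    load_kw = n_racks * RACK_POWER_KW * overhead
    return load_kw / (ISS_ARRAY_KW / ISS_ARRAY_M2)  # divide by kW per m^2

for n in (1, 4, 10, 100):
    print(f"{n:4d} racks -> {array_area_m2(n):8.0f} m^2 of ISS-class panels")
```

At ISS-class panel performance, even four high-density racks come out around 3,800 m², more than the entire ISS array, once eclipse charging and conversion losses are counted.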
Quote from: mlindner on 04/16/2021 09:12 am
<snip>

Hundreds of racks? That's still a small datacenter. Big would be tens or hundreds of megawatts per site, multiple buildings, on cheap land near cheap power and cheap cooling. I don't think many people really understand the industrial scale of large datacenters these days.

Smaller deployments close to your users are still useful, but LEO only puts you that close for a small fraction of the orbit.
Quote from: launchwatcher on 04/17/2021 02:56 pm
<snip>

LEO puts you close 24/7/365 if you have a global mesh of laser-linked satellites.
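For a rough sense of why the laser mesh helps on long-haul routes, here's an illustrative Python sketch. The 550 km altitude, the fiber refractive index of 1.47, and the 1.4x terrestrial route factor are all assumptions for illustration:

```python
# Illustrative one-way propagation delay: LEO laser mesh vs terrestrial fiber.
# Assumed numbers: 550 km Starlink-like altitude, fiber light speed ~ c/1.47.

C = 299_792.458     # km/s, speed of light in vacuum
FIBER_C = C / 1.47  # light is ~32% slower in glass fiber
ALT_KM = 550.0      # assumed satellite altitude

def mesh_delay_ms(ground_km: float) -> float:
    """Up to the mesh, across it at vacuum light speed, back down."""
    path = 2 * ALT_KM + ground_km  # crude: ignores link zigzag and hop delays
    return path / C * 1000

def fiber_delay_ms(ground_km: float, route_factor: float = 1.4) -> float:
    """Fiber rarely follows the great circle; assume a 1.4x route factor."""
    return ground_km * route_factor / FIBER_C * 1000

for d in (1_000, 5_000, 15_000):  # ground distances in km
    print(f"{d:6d} km: mesh ~{mesh_delay_ms(d):5.1f} ms, "
          f"fiber ~{fiber_delay_ms(d):5.1f} ms")
```

The vacuum path wins comfortably at intercontinental distances, while the up-and-down hop penalty dominates on short routes.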
Perhaps someone could clarify the difference between the energy used by a data centre and the energy a satellite would need to receive, temporarily store, and then transmit the data? It seems that even small amounts of storage on a satellite for high-demand repeat data would be beneficial, e.g. the data for just-released movies on Netflix.
Edit: Do data centres only use so much power because they deal with hundreds of thousands of users/requests? If so, then Starlink satellites serving much smaller numbers of users should need far less power, and comparing Starlink "data centres" to ground-based centres isn't valid.
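One way to see why a small onboard cache could punch above its weight: if request popularity follows a Zipf distribution (a common modelling assumption for video catalogs, not something established here), the hottest few titles capture a disproportionate share of requests. A minimal sketch:

```python
# Why a small onboard cache helps: under an assumed Zipf popularity model,
# caching just the hottest titles captures most requests.

def zipf_hit_rate(cache_size: int, catalog_size: int, s: float = 1.0) -> float:
    """Fraction of requests served from a cache holding the top-ranked titles."""
    weights = [1 / (rank ** s) for rank in range(1, catalog_size + 1)]
    return sum(weights[:cache_size]) / sum(weights)

CATALOG = 100_000  # titles, illustrative
for cached in (100, 1_000, 10_000):
    rate = zipf_hit_rate(cached, CATALOG)
    print(f"cache top {cached:6d} of {CATALOG} titles -> ~{rate:.0%} hit rate")
```

Under these assumptions, caching 1% of a 100,000-title catalog serves on the order of 60-80% of requests, which is the intuition behind storing just-released movies onboard.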
There was an application I saw mentioned a while ago (I can't recall where) that I thought might have some merit: a satellite 'data center' placed in orbit around the Moon or Mars, acting as a processing and data-storage offload for robotic missions in the vicinity. A rover on Mars, for example, could live-stream a camera feed up to the satellite, where more energy-intensive techniques like GPU-based machine learning could handle path planning and higher-level goal determination far faster than the rover's onboard computers. This would potentially allow much faster surface rovers without significantly more onboard power, while keeping the fast vision-based path planning that makes remotely operating a quick rover (on the order of 1 m/s) practical. The benefit obviously grows for locations far from Earth with long round-trip communication times.

At risk of reductio ad absurdum, you could likely get away with an off-the-shelf RC car and a webcam (with some insulation for the electronics/battery, of course) while still leveraging the multi-kW processing capability of the orbiting data center.
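A quick sketch of the latency argument, with assumed illustrative numbers (400 km orbiter altitude, 0.1 s of processing time):

```python
# Round-trip command latency for a Mars rover: nearby orbiter vs Earth.
# Shows how far a 1 m/s rover travels "blind" while waiting for a decision.
# Altitude, distances, and processing time are assumed illustrative values.

C_KM_S = 299_792.458
ORBITER_ALT_KM = 400.0           # assumed low Mars orbit
EARTH_MARS_KM = (54.6e6, 401e6)  # closest / farthest approach, roughly
ROVER_SPEED_M_S = 1.0

def rtt_s(one_way_km: float, processing_s: float = 0.1) -> float:
    """Round-trip light time plus an assumed processing delay."""
    return 2 * one_way_km / C_KM_S + processing_s

orbiter = rtt_s(ORBITER_ALT_KM)
print(f"via orbiter: RTT ~{orbiter:.2f} s, "
      f"blind distance ~{orbiter * ROVER_SPEED_M_S:.2f} m")

for d in EARTH_MARS_KM:
    rtt = rtt_s(d)
    print(f"via Earth at {d/1e6:5.1f}M km: RTT ~{rtt/60:5.1f} min, "
          f"blind distance ~{rtt * ROVER_SPEED_M_S/1000:.1f} km")
```

The point being: via a local orbiter the rover out-drives its decision loop by about a tenth of a meter; via Earth, by hundreds of meters to kilometers.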
There was a NIAC 2021 proposal for a pony-express-style system of data-hauling cyclers (functionally semi-mobile satellite datacenters) to shuttle data from distant probes. Taking the station wagon full of tapes to the stars... But the NIAC proposal sounds like it still does onload/offload via laser rather than physically picking up the equivalent of an Amazon Snowball from a probe.
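The station-wagon comparison can be made quantitative. A sketch with entirely assumed figures (10 PB hauled, a one-year transit, a ~100 Mbps optical link):

```python
# "Station wagon full of tapes" in space: effective bandwidth of physically
# hauling storage vs trickling it over a laser link. All figures assumed.

HAUL_PB = 10.0        # petabytes of storage carried by the cycler
TRANSIT_DAYS = 365.0  # one-way trip time back toward Earth
LASER_GBPS = 0.1      # deep-space optical link, ~100 Mbps assumed

haul_gbps = HAUL_PB * 1e15 * 8 / (TRANSIT_DAYS * 86_400) / 1e9
print(f"cycler: ~{haul_gbps:4.1f} Gbps effective (latency: {TRANSIT_DAYS:.0f} days)")
print(f"laser:  ~{LASER_GBPS:4.1f} Gbps continuous (latency: light time)")
```

Throughput favors hauling by an order of magnitude at these numbers; latency, measured in months rather than minutes, favors the laser.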
In space you can't convect or conduct heat away, which are the two most efficient forms of heat dissipation. You can only radiate it, which becomes very difficult at this level of energy consumption. The radiators are going to be massive, likely much larger than the solar panels that collect the energy in the first place.

Secondly, you have radiation issues. Single-event upsets become huge problems, and the hardware to handle them isn't cheap. Data corruption would be a massive issue given the density of high-performance compute. (As a percentage of volume, satellites have very little space dedicated to transistors.) The transistors would also be much smaller than what is commonly flown on spacecraft today, and the DRAM would need some form of enhanced ECC more resilient than even normal ECC.

Data centers are already really hard to build properly. (See, for example, the recent case of one burning down in Europe.) If you're looking for exotic places to build them, the new fad is underwater and underground, because the water or earth shields them from radiation effects, and in water it becomes much easier to dissipate all that heat than to pay for the extreme cooling systems modern data centers require: pumps just pull in external water, circulate it, and expel it. Putting a data center in space is the complete opposite in terms of access to electrical power, ease of cooling, and radiation environment. Data centers are being built next to hydroelectric dams for the cooling water and cheap power, and in places like Iceland for its low air-cooling temperatures and cheap geothermal power.

It's NEVER going to be the case that we put data centers in space until we're building them there in the first place and there's already a massive in-space human presence.
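The radiator claim can be sanity-checked with the Stefan-Boltzmann law. This sketch assumes a two-sided radiator at 320 K with emissivity 0.9 and ignores solar and Earth heat loads; whether the radiator ends up larger than the solar array depends strongly on how warm you can run it, since the required area scales as 1/T⁴:

```python
# Rough radiator sizing for an orbital datacenter via the Stefan-Boltzmann law.
# Assumptions: two-sided radiator, emissivity 0.9, coolant loop near 320 K,
# environmental heat loads (sun, Earth albedo) ignored for simplicity.

SIGMA = 5.670e-8    # W/m^2/K^4, Stefan-Boltzmann constant
EMISSIVITY = 0.9
RADIATOR_T = 320.0  # K; assumed electronics-friendly radiator temperature

def radiator_area_m2(heat_kw: float) -> float:
    """Two-sided radiating area needed to reject heat_kw of waste heat."""
    flux = EMISSIVITY * SIGMA * RADIATOR_T ** 4  # W/m^2 rejected per side
    return heat_kw * 1000 / (2 * flux)

for racks in (1, 4, 100):
    heat = racks * 35.0  # nearly all electrical input ends up as waste heat
    print(f"{racks:4d} racks ({heat:6.0f} kW) -> "
          f"~{radiator_area_m2(heat):7.0f} m^2 of radiator")
```

At these optimistic assumptions the radiator comes out smaller than the solar array; drop the radiator temperature toward typical spacecraft coolant loops and add environmental loading, and the area grows several-fold.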
Lisa Costa, chief technology and innovation officer of the U.S. Space Force, said the service is eyeing investments in edge computing, data centers in space and other technologies needed to build a digital infrastructure.

“Clearly, the imperative for data driven, threat informed decisions is number one, and that means that we need computational and storage power in space, and high-speed resilient communications on orbit,” Costa said Jan. 13 at a virtual event hosted by GovConWire, a government contracting news site.

<snip>

A key goal of the Space Force is to be agile and “outpace our adversaries,” said Costa. Timely and relevant data is imperative, and that will require investments in government-owned and in commercial infrastructure in space, she added. “Things like cloud storage, elastic computing, critical computation for machine learning, infrastructure in and across orbits.”
The latest Apple Watch has 16 times the memory of the central processor on NASA’s Mars 2020 rover. For the new iPhone, 64 times the car-size rover’s memory comes standard.

For decades, people dismissed comparisons of terrestrial and space-based processors by pointing out the harsh radiation and temperature extremes facing space-based electronics. Only components custom built for spaceflight and proven to function well after many years in orbit were considered resilient enough for multibillion-dollar space agency missions.

While that may still be the best bet for high-profile deep space missions, spacecraft operating closer to Earth are adopting state-of-the-art onboard processors. Upcoming missions will require even greater computing capability.
Since traveling in February 2021 to the International Space Station, Spaceborne Computer-2 has completed 20 experiments focused on health care, communications, Earth observation and life sciences. Still, the queue for access to the off-the-shelf commercial computer linked to Microsoft’s Azure cloud keeps growing.

Mark Fernandez, principal investigator for Spaceborne Computer-2, sees a promising future for space-based computing. He expects increasingly capable computers to be installed on satellites and housed in orbiting data centers in the coming years. Edge processors will crunch data on the moon, and NASA’s lunar Gateway will host advanced computing resources, Fernandez told SpaceNews.

Fernandez, who holds a doctorate in scientific computing from the University of Southern Mississippi, served as software payload developer for HPE’s original Spaceborne Computer, a supercomputer that reached the ISS in August 2017 and returned to Earth a year and a half later in a SpaceX Dragon cargo capsule.
A data processor launched to orbit by the Space Development Agency has performed an early demonstration of autonomous data fusion in space, said one of the companies supporting the experiment.

Scientific Systems Company Inc. (SSCI) developed an artificial intelligence-enabled edge computer for the experiment known as POET, short for prototype on-orbit experimental testbed.

The POET payload rode to orbit on a Loft Orbital satellite that launched June 30 on the SpaceX Transporter-2 rideshare mission.
I honestly think this does make sense, long-term.