Author Topic: Overclocked orbital data centers as a potential space industry?  (Read 14340 times)

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39270
  • Minnesota
  • Liked: 25240
  • Likes Given: 12115
Moving closer to the sun improves your power production via the inverse square law, but it makes shielding your radiator more difficult. However, conventional computer chips operate in a fashion that can't be reduced directly to the fundamental limit of Landauer's principle (which governs an ideal device). Their operating power /isn't/ simply proportional to temperature, so operating them at a higher temperature may make sense until the error rate gets too high.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Asteroza

  • Senior Member
  • *****
  • Posts: 2836
  • Liked: 1084
  • Likes Given: 33
For comparison with cryogenic cooling for terrestrial datacenters, remember that retail liquid nitrogen costs about the same as milk by volume, and for a datacenter that recycles its coolant, there are only marginal cooling costs associated with operating a large cryocooler. To compete with that requires either a substantial advantage of some sort, or some physical limitations that make on-orbit operations attractive.

Previously, on-orbit image processing may have been a market due to RF downlink bandwidth limitations as image resolution and quantity go up, but with the recent LADEE demo of laser communication approaching 1 Gbps, the incentive for on-orbit processing will effectively disappear for the short term.

High security via a citadel approach, such as basing both a hardware security module and a broadcasting high entropy source in orbit to prevent direct attacks, may be a marketable service, but that is trivial hardware in comparison to a full blown datacenter.

There has been talk of bitcoin mining rigs in cubesats/smallsats that can talk to each other (emulating the Iridium constellation), in an effort to protect against node-majority attacks taking over the bitcoin network and distorting the blockchain (a legitimate concern, as there are allegations that a large state-sponsored group is already trying to gain a node majority). It would also put sections of the bitcoin network and the associated bitcoin exchanges out of the direct reach of conventional law enforcement in countries that oppose bitcoin, provided the launch providers are neutral or pro-bitcoin. Which would probably limit you to Ukrainian launches...

Long term cold storage of data would suggest a lunar far side subsurface installation due to the shielding necessary.

So really, some kind of supranational network infrastructure that can't be easily subverted, or some derivative thereof, seems to be the ideal market. A TOR satcom service perhaps?


Online randomly

  • Full Member
  • ****
  • Posts: 674
  • Liked: 326
  • Likes Given: 182
The concept is flawed because, although you can passively achieve very low temperatures in space (e.g. JWST), this is only at very low energy flows. The only way to practically dissipate heat in space is via radiation, and this obeys the Stefan-Boltzmann law: power radiated is proportional to the fourth power of temperature.

You will need to dissipate a great deal of heat which forces your radiators to be massive if you are trying to do this passively.

But you've neglected Landauer's principle.  The energy you need to remove per computation decreases as you decrease temperature.  You're using thermal engineering, but you also have to use the thermodynamics of computation.  You have to combine BOTH.  There is no other way for the concept to make sense.  This is what you must add:

E = k T ln 2

This is the energy per computation.  The colder you get, the less energy you have to dissipate through the radiator.
Again you are failing a basic reality check. Landauer's principle is irrelevant. Current computing technology operates at millions of times more energy per bit than Landauer's theoretical limits.
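
A quick back-of-envelope sketch in Python illustrates the gap (the ~10 fJ per switched bit figure is just an assumed ballpark for modern CMOS logic, not a measured number):

import math

k_B = 1.380649e-23                   # Boltzmann constant, J/K
T = 300.0                            # room temperature, K

landauer = k_B * T * math.log(2)     # minimum energy to erase one bit, ~2.9e-21 J
cmos_per_bit = 1e-14                 # assumed ~10 fJ per switched bit in modern CMOS (ballpark)

print(f"Landauer limit at {T:.0f} K: {landauer:.2e} J/bit")
print(f"Assumed CMOS energy/bit:   {cmos_per_bit:.2e} J")
print(f"Ratio: {cmos_per_bit / landauer:.1e}x above the theoretical limit")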

Overclocking processors by cooling them has nothing to do with Landauer's principle. Cooling modern CMOS processors reduces the interconnect resistance and the channel resistance of the FETs (the dominant effect), which allows higher current flow to drive the capacitance of each node, reducing rise and fall times. It also reduces leakage currents, which among other things helps switching thresholds. There are limits, though: get the junction too cold and the carriers start to 'freeze out', and the channel resistance starts going up again, which makes things worse.

Competitive overclockers get a great deal of their speed-up by increasing supply voltages. However, that's not really a viable technique for data centers because it drastically reduces the reliability and lifetime of the ICs. As the current densities increase you hit electromigration limits, and the electrons literally start to tear the physical conductors apart.

Real world systems at most use chilled water cooling.

You sound like me in my first years of college. You need some real world tempering.

"In Theory, Theory and Practice are the same. In Practice they're not."

Online MP99

Even more important - progress in single-threaded processors has stalled.  Going to lower temperatures will allow the speeds of single processors to continue to advance.  In research, this has already happened.  The most powerful single thread computations have been done at supercooled temperatures.
That's why real engineers invented multithreaded programming and GPU computing, instead of sending computers to space.

Also, although core speed has not increased much in recent years, power consumption has been severely reduced.

And the processors get more sophisticated, and the MIPS-to-GHz ratio improves.

cheers, Martin

Online MP99

The concept is flawed because, although you can passively achieve very low temperatures in space (e.g. JWST), this is only at very low energy flows. The only way to practically dissipate heat in space is via radiation, and this obeys the Stefan-Boltzmann law: power radiated is proportional to the fourth power of temperature.

You will need to dissipate a great deal of heat which forces your radiators to be massive if you are trying to do this passively.

If you go the active cooling approach, it becomes vastly easier to actually do it on Earth, and it would also be vastly cheaper, especially from a maintenance and upgrade point of view.

Also, if there were some economic advantage, you would see cryocooled processors in use, at least in niche applications. But you do not.

Datacentres consume huge amounts of energy and put it all out as heat. ISTM that datacentre cooling might be eased if they switched to simple watercooling, but even that seems to be a step too far for the industry.

Simple water cooling will get you a fair way into the overclocking regime, and ISTR that Peltier and other active coolers can easily go beyond that.

But I see zero chance that businesses will put their critical infrastructure onto overclocked computers. Businesses want reliability, and that's what the clocking spec gives you.

cheers, Martin

Offline AlanSE

  • Full Member
  • *
  • Posts: 153
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 54
  • Likes Given: 33
For many data centers, about half the power consumption is for cooling.  The trend that would push for greater viability of this concept would be if that fraction rises in the future.  For now, the main point is that the idea may be of interest both to people working on processor advancement and to those promoting space industries.

I encourage reading the link I included in the OP.  It is where I'm coming from regarding the temperature and performance parts of this concept.  According to that source, the following things happen with decreased temperature:

http://www.extremetemperatureelectronics.com/tutorial1.html

 - field-effect type transistors exhibit increased gain and lower leakage
 - parasitic resistance and capacitance decrease
 - heat transfer improves
 - many exhibit lower noise

Their part 3 contains more information about the "freeze-out" of carriers referenced by another user:

http://www.extremetemperatureelectronics.com/tutorial3.html

Corresponding points by randomly:

 - cooling reduces interconnect resistance and channel resistance of the FET
 - reduces leakage currents, helping switching thresholds
 - if the junction gets too cold, the carriers 'freeze' out

As articulated by my reference, the temperature at which freeze-out occurs is comfortably below what you would ever hope to passively cool to in space with an over-sized radiator.  JWST is about 40 Kelvin, and there are major issues with going below this, particularly in LEO (meaning it is nigh impossible).  The operating temperature would have to be much higher than this, although still much lower than what you'd get putting the data center anywhere on Earth - that's the entire point.  The above points are relevant and overlap a great deal with the reference, although the relevance of increasing voltage gives me pause.

Saying that the writing here is on the level of a college freshman shows that you are clearly not actively teaching at university level.  My advice is to keep in mind who your audience is.  This thread is currently on the 2nd page of Google results for "orbiting data center".  Whether that was your intention or not, you are contributing to the same information commons as Wikipedia does.  I do my best to write quality material for the online encyclopedia, so let that reflect on your assessment of its reliability as you will.

Speaking of related resources, here is an instance of a similar proposal:

http://server-sky.com/
http://spacejournal.ohio.edu/issue16/lofstrom.html

They've opted for ultra-thin and ultra-light computers, which is quite different from what I had in mind.  I expect that radiation could be a show stopper.  That's just one of the questions that comes up with their proposal.  I don't mean to endorse that, but the paper about it is relatively information-rich.  I have yet to parse their central economic motivation, although they start out with similar rhetoric about energy used for computation.

Following up on the talk about radiator area, I wrote out some of the specifics.  If you credit the temperature dependence of energy per computation, then the radiator area per unit of computation scales as 1/T^3 (energy per operation goes as T, while radiated power per area goes as T^4).  Most thermal arguments in this thread could be put into these terms.

http://mathbin.net/521925

Neglecting other factors, the most economic temperature would then be dictated by the price per area of solar panels (if this is the energy source) relative to the price per area of the radiator.  Since the function that a radiator fills is the less sophisticated of the two, a radically cheaper radiator is at least plausible, making your best figure-of-merit occur at temperatures significantly lower than average Earth surface temperatures.
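
A minimal numeric sketch of that trade-off (Python; the flux, efficiency, operation rate, and cost-per-area figures are placeholders of mine, purely for illustration of the scaling above):

import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
sigma = 5.670374e-8     # Stefan-Boltzmann constant, W/m^2/K^4
eps = 0.9               # assumed radiator emissivity
R = 1e26                # assumed bit operations per second (Landauer-limited, illustrative only)

def radiator_area(T):
    # heat to reject = R*k*T*ln2; rejected at eps*sigma*T^4 per m^2, so area scales as 1/T^3
    return R * k_B * T * math.log(2) / (eps * sigma * T**4)

def solar_area(T, flux=1361.0, efficiency=0.2):
    # power to supply = R*k*T*ln2; panels deliver flux*efficiency per m^2, so area scales as T
    return R * k_B * T * math.log(2) / (flux * efficiency)

cost_rad, cost_solar = 100.0, 1000.0   # placeholder cost per m^2, chosen only to make radiators "cheap"

best_T = min(range(50, 401), key=lambda T: cost_rad * radiator_area(T) + cost_solar * solar_area(T))
print("cheapest operating temperature ~", best_T, "K")   # ~200 K with these placeholder numbers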

Thanks for all of the comments here.
« Last Edit: 12/17/2013 03:36 pm by AlanSE »

Offline Nilof

  • Full Member
  • ****
  • Posts: 1177
  • Liked: 597
  • Likes Given: 707
So why would you want to send computers into space rather than to Antarctica?

Compared to LEO, Antarctica is colder, has heat sinks other than radiation, and can be linked to the rest of earth by optical fiber for higher bandwidth and lower average latency.

Radiating energy to cool yourself below 200K in LEO is an engineering challenge. The radiated energy per surface area will be minimal, and the radiator has to be carefully built to be shielded from both the hot Earth below it and the sun. IR telescopes that need cooling have been sent into solar orbits or Earth-Sun L2 for a reason. Meanwhile, in Antarctica you basically get an infinite ~200K heat sink for free.
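
To put a rough number on that (a quick Python sketch; ideal black-body surfaces and a view factor of 1 are simplifying assumptions of mine):

sigma = 5.670374e-8              # Stefan-Boltzmann constant, W/m^2/K^4
T_rad, T_earth = 200.0, 255.0    # radiator temperature; Earth's approximate effective emission temperature

emitted = sigma * T_rad**4       # ~91 W/m^2 radiated by an ideal 200 K surface
absorbed = sigma * T_earth**4    # ~240 W/m^2 absorbed if the surface faces the Earth unshielded
print(emitted, absorbed)         # net heat flow is into the radiator: it can't stay at 200 K without a shield
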
For a variable Isp spacecraft running at constant power and constant acceleration, the mass ratio is linear in delta-v.   Δv = ve0(MR-1). Or equivalently: Δv = vef PMF. Also, this is energy-optimal for a fixed delta-v and mass ratio.

Offline AlanSE

  • Full Member
  • *
  • Posts: 153
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 54
  • Likes Given: 33
Radiating energy to cool yourself below 200K in LEO is an engineering challenge. The radiated energy per surface area will be minimal, and the radiator has to be carefully built to be shielded from both the hot Earth below it and the sun. IR telescopes that need cooling have been sent into solar orbits or Earth-Sun L2 for a reason. Meanwhile, in Antarctica you basically get an infinite ~200K heat sink for free.

Yes, that is the calculation that I just posted, except that I did it for 100 K.

That does neglect the shielding from the Earth and sun.  I've tried to carefully qualify my statements with that detail.  Viable designs for the shielding are quite difficult to think of, even theoretically.  At low orbits, the angle that the Earth takes up is huge.  For the sun, you'd prefer a stationary shield, which could be a strange ring shape (and would require seasonal corrections, depending on the orbital inclination).  But then sunlight striking the Earth shield can reflect toward the radiator.
« Last Edit: 12/17/2013 03:49 pm by AlanSE »

Offline savuporo

  • Senior Member
  • *****
  • Posts: 5152
  • Liked: 1002
  • Likes Given: 342
The entire concept is a nonstarter - for computing. Latency and bandwidth alone kill it, and cooling and "free solar" energy are very small net benefits.

There MIGHT be a market for storage. Finding a physical location for a safe backup of some types of critical data is actually not always that easy - but LEO is probably not a good location either. And once you are outside of LEO, COTS computing equipment won't work, so the cost structure simply won't close.

If you find a way to economically put very large data storage in LEO, on an orbit that doesn't decay and that the first small debris impact is not going to take out, with a reasonable communication window for backup and retrieval type operations, and with enough data redundancy that it can work on COTS equipment, etc. etc., there might be a very small niche market for it.
Hard to close that business case.

« Last Edit: 12/17/2013 05:05 pm by savuporo »
Orion - the first and only manned not-too-deep-space craft

Offline muomega0

  • Full Member
  • ****
  • Posts: 862
  • Liked: 70
  • Likes Given: 1
Regarding the revenue streams for private development of new space industries I want to know what people think of this proposal:
Build low-temperature data centers in space to do cloud computing

Why?  In short, because low-temperature processors have better performance.  By that I mean, they can have faster clock cycles.  Another motivation is that energy can be cheap in space, but this isn't very compelling.  On the other hand, it's possible to expel energy by radiators at a very low temperature, depending on the heat input and size of the radiator of course.  The physical underpinning of this idea is extremely well-established.  We know that processors can go faster at low temperatures, but the fundamental law is that any computational operation requires a minimum energy due to the laws of thermodynamics. 

 The most powerful single thread computations have been done at supercooled temperatures.

If you imagine commodity computing, then there will be a price per calculation, and this price will be higher if the computation is performed faster.  Even if a supercomputer in space can never compete with the cost-per-computation on Earth, it doesn't matter.  It only needs to compete with a supercomputer on Earth that runs at the same speed - and that's a race that space data centers might be able to win 10 years from now.  Additionally, computer science has extremely robust arguments that establish why not all problems can be made parallel efficiently.  So we should expect the demand to remain strong.

It is cheaper to launch to LEO than to GEO, http://en.wikipedia.org/wiki/Communications_satellite,
but what other advantages would a LEO data center provide (time lag comparison, etc.) that someone would pay a premium for over another service?  As Savuporo suggests, could video/data be uploaded and distributed faster from LEO than with the current approach, regardless of the overclocking?  Raising the altitude above ISS (say to 550 nm) will significantly reduce reboost, with minimal IMLEO mass penalty for the smaller LV (not HLV).

Would you not need two or three LEO data centers due to the ~ 90 min orbit or would it communicate to GEO?

It would help if you would break down the heat load by source, since not all of the components need to be at cryogenic temperatures - for example, the CPU, data storage, power conversion, etc.


The radiator design would be a nightmare, particularly in LEO, and that is the most obvious place for it considering the time lag.  The JWST is in a spot where low temperatures are easier to manage, but it's not as good for providing data services.  In LEO, you would need both a sun shield and an Earth shield.  You could need completely new, radical radiator designs.  Radiation would be an economic deal-breaker, so it would have to be shielded.

The concept is flawed because, although you can passively achieve very low temperatures in space (e.g. JWST), this is only at very low energy flows. The only way to practically dissipate heat in space is via radiation, and this obeys the Stefan-Boltzmann law: power radiated is proportional to the fourth power of temperature.

You will need to dissipate a great deal of heat which forces your radiators to be massive if you are trying to do this passively.

If you go the active cooling approach, it becomes vastly easier to actually do it on Earth, and it would also be vastly cheaper, especially from a maintenance and upgrade point of view.
To try and put some numbers on what randomly stated, one would have to trade radiator size against power and minimize the amount of heat that has to be removed at the colder temperatures (if needed).

To shrink the radiator size, operate at a higher temperature.

Q = sigma * e * A * (T^4 - Tsink^4), so the radiator area vs. thermal load (taking Tsink = 0 for simplicity) is:

Load:    1 kW   10 kW   75 kW   100 kW   1 MW
10C:     4      36      269     359      3587   m2
130C:    1      9       65      86       861    m2

As a comparison, the ISS power module radiators reject about 6 kW with a peak of 14 kW (eclipse), measure ~3 m x 14 m, and weigh 740 kg.
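
The same areas can be roughly reproduced with a few lines of Python (the emissivity of 0.77 is my own guess, chosen because it approximately matches the table above; Tsink = 0 as above):

sigma = 5.670374e-8      # Stefan-Boltzmann constant, W/m^2/K^4
e = 0.77                 # assumed emissivity, picked to roughly match the table above

for T_C in (10, 130):
    T = T_C + 273.15
    flux = sigma * e * T**4                      # W rejected per m^2 with Tsink = 0
    areas = [round(Q / flux) for Q in (1e3, 1e4, 7.5e4, 1e5, 1e6)]
    print(f"{T_C} C: {areas} m^2 for 1 kW, 10 kW, 75 kW, 100 kW, 1 MW")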

To cool to lower temperatures and reject at a higher temperature, a refrigeration system is needed, which takes power.

Carnot tells us that with a sink (rejection) temperature of 72F, a refrigeration system needs about 6.7 W of input power per watt of heat lifted (W/Wt); at -80F, the approximate effective sink temperature in LEO, about half that input power, 3.3 W/Wt; and at 260F, about 10 W/Wt.  Perhaps a two-phase bus would offer even better mass and power advantages.
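
As a formula, the ideal (Carnot) input power per watt of heat lifted is (Tsink - Tcold)/Tcold.  A minimal Python helper (the 40 K cold-side value in the example call is a placeholder of mine, since the cold-side temperature isn't specified above):

def carnot_watts_per_watt(T_cold_K, T_sink_K):
    # ideal (Carnot-limited) work input per watt of heat lifted from T_cold_K and rejected at T_sink_K
    return (T_sink_K - T_cold_K) / T_cold_K

# example: reject at 72 F (about 295.4 K) while holding an assumed 40 K cold side
print(carnot_watts_per_watt(40.0, 295.4))    # ~6.4 W per Wt, the same order as the figures above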

Perhaps you could consider adding a data center/overclocked supercomputer as another HSF leg to the best architecture, one that features an L2 Gateway and a LEO zero-boiloff LH2 depot, includes HSF satellite assembly and servicing at L2, and is not a one-legged stool.

HSF would be a great asset to service a LEO data center and the depot, but I have not seen the economics yet.  Always looking out for a new low-cost mission  ;)
« Last Edit: 12/18/2013 11:38 am by muomega0 »

Offline grondilu

  • Full Member
  • ****
  • Posts: 613
  • France
  • Liked: 68
  • Likes Given: 14
If cooling processors could be done with radiation only, we'd put them in a large room with black walls.

Offline AlanSE

  • Full Member
  • *
  • Posts: 153
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 54
  • Likes Given: 33
If cooling processors could be done with radiation only, we'd put them in a large room with black walls.

You can't passively lower your temperature below that of your environment.  The proposal here is to put the data center in an environment that has a lower temperature than Earth's surface.  A radiator on Earth can't get lower temperatures because its surroundings radiate back to it.

You could make a black sheet and point it at the night sky, but there's still air.  The air exchanges heat with it by convection, which is much more effective than thermal radiation.  Even if you could insulate from that, the air also produces thermal radiation.  There are only a few wavelength "windows" to space in the atmosphere - and thermal radiation is not included.  This is why we have to observe IR from space.  Because of this, there's not really any theoretical way to access temperatures much lower than ambient on Earth without an active thermal cycle.  Furthermore, actively cooling processors on Earth is fundamentally prohibited from increasing the total energy efficiency of computation (including the thermal cycle), because that would violate the 2nd law of thermodynamics.  There still might be reasons to do this, but reducing energy use (by rule) cannot be one of them.

Even in space, the equilibrium blackbody temperature of something in LEO influenced only by the Earth is something like -30 degrees C.  That's why even a "bare" radiator couldn't do what I've described.
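
For a rough reproduction of that figure (a Python sketch; the geometry - an Earth-facing plate with an insulated back and a view factor of about 0.85 - is my own simplifying assumption):

T_earth = 255.0       # Earth's approximate effective black-body emission temperature, K
view_factor = 0.85    # assumed view factor to Earth for a plate at low altitude

# Earth-facing plate, back side insulated: absorbed sigma*T_earth^4*F balances emitted sigma*T^4
T_eq = T_earth * view_factor**0.25
print(T_eq - 273.15)  # about -28 C, in the same ballpark as the -30 C quoted above
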
« Last Edit: 12/19/2013 03:22 pm by AlanSE »

Offline grondilu

  • Full Member
  • ****
  • Posts: 613
  • France
  • Liked: 68
  • Likes Given: 14
You can't passively lower your temperature below that of your environment.  The proposal here is to put the data center in an environment that has a lower temperature than Earth's surface.  A radiator on Earth can't get lower temperatures because its surroundings radiate back to it

Sorry, I didn't get your point exactly.  I've just reread the thread more carefully, and I understand now that you want the chips to operate at low temperature because then they will consume less energy, which may be easier to radiate away.

But then, as randomly pointed out, the fourth power of the Stefan-Boltzmann law kills you.
« Last Edit: 12/19/2013 05:53 pm by grondilu »

Offline cordwainer

  • Full Member
  • ****
  • Posts: 563
  • Liked: 19
  • Likes Given: 7
Getting all the requisite hardware into space for such a data center costs more than cryogenically cooling the same data center here on Earth. Plenty of GIGs and supercomputers pay for cryogenic cooling Earthside. I don't see the benefit of putting a data center in space unless you are doing it for other reasons, like security or pre-processing of information for dedicated cellular links to individual users (as in the case of portable satellite radio, TV, and internet broadcast stations).
