Author Topic: Overclocked orbital data centers as a potential space industry?  (Read 10688 times)

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
Regarding the revenue streams for private development of new space industries, I'm most familiar with these:

 - Imagery of Earth, communications, and GPS
 - People will pay for trips into space, for tourism or possibly colonization
 - We can mine heavy metals and bring them back to Earth
 - Energy from solar power beamed back to Earth
 - In-situ resource development for propellant, which developers of the other points will pay for

Feeling a bit underwhelmed by how convincing these are, I've been wondering if there's an overlooked revenue potential.  I want to know what people think of this proposal:

Build low-temperature data centers in space to do cloud computing

Why?  In short, because low-temperature processors have better performance.  By that I mean, they can have faster clock cycles.  Another motivation is that energy can be cheap in space, but this isn't very compelling.  On the other hand, it's possible to expel energy by radiators at a very low temperature, depending on the heat input and size of the radiator of course.  The physical underpinning of this idea is extremely well-established.  We know that processors can go faster at low temperatures, but the fundamental law is that any computational operation requires a minimum energy due to the laws of thermodynamics.  This energy is proportional to the temperature.

https://en.wikipedia.org/wiki/Landauer%27s_principle

http://www.extremetemperatureelectronics.com/tutorial1.html
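For concreteness, here is a toy calculation of that theoretical floor (the Landauer limit; note that real chips today dissipate many orders of magnitude more than this per operation):

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_energy(temp_k):
    # minimum energy (joules) to erase one bit at temperature temp_k
    return k_B * temp_k * math.log(2)

for T in (300, 150, 77, 40):
    print(f"{T:>3} K: {landauer_energy(T):.2e} J per bit")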

Because the case for this proposal is firmly rooted in physical principles, it seems that it will become more significant in the future, as long as computations performed remotely continue to have value.  Even more important - progress in single-threaded processors has stalled.  Going to lower temperatures will allow the speeds of single processors to continue to advance.  In research, this has already happened.  The most powerful single thread computations have been done at supercooled temperatures.

Of course we can get these temperatures on Earth, but only with extra energy input via a thermal cycle.  That doesn't scale well if you want to push the gross FLOPS number and the maximum single-thread speed at the same time.  That's where space would be necessary.  If you could passively cool a data center in space to an extremely low temperature then you could get a combination of high speeds while at the same time low cost per FLOP.

If you imagine commodity computing, then there will be a price per calculation, and this price will be higher if the computation is performed faster.  Even if a supercomputer in space can never compete with the cost-per-computation on Earth, it doesn't matter.  It only needs to compete with a supercomputer on Earth that runs at the same speed - and that's a race that space data centers might be able to win 10 years from now.  Additionally, computer science has extremely robust arguments that establish why not all problems can be made parallel efficiently.  So we should expect the demand to remain strong.

The challenges would obviously be immense, which is why I'm posting this on a forum with rocket scientists.  The radiator design would be a nightmare, particularly if in LEO, and that is the most obvious place for it considering the time lag.  The JWST is in a spot where low temperatures are easier to manage, but it's not as good for providing data services.  In LEO, you would need both a sun shield and an Earth shield.  You could need completely new, radical radiator designs.  Radiation would be an economics deal-breaker, so it would have to be shielded.  That's a problem, and I think it could only be solved by using materials transported from lunar or asteroid sources, considering the scale of shielding needed to reduce noise to a desirable level.  Then the computers themselves would have to be much lighter than the clusters we use today, combined with lower launch costs.

Nonetheless, I think this is on a similar scale to what Planetary Resources is looking into doing.  If the asteroid mining business gets off the ground, I think that infrastructure could also be used to develop this (possibly significant) product of "bulk high-speed cloud computing".  This seems obviously important for making the sales pitch for private development of space.  I have not heard anyone else make this argument, so now I'm making it.
« Last Edit: 12/16/2013 03:40 PM by AlanSE »

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 32443
  • Cape Canaveral Spaceport
  • Liked: 11188
  • Likes Given: 331
The transportation, infrastructure and maintenance costs outweigh the temperature advantage.

Offline DMeader

  • Full Member
  • ****
  • Posts: 954
  • Liked: 100
  • Likes Given: 47
Also, for anything beyond LEO, consider the latency issues.

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
Also, for anything beyond LEO, consider the latency issues.

That's a meaningful detail.  For LEO itself, there are still bandwidth issues.  To the best of my understanding, we can achieve pretty good connectivity with things in LEO, but it can't hope to compare to the amount of data you can push through a stationary fiber cable on Earth.  There would be a premium on the amount of data you send to the satellite and that you get back from it.  But this is something I don't know as much about.  We have a large amount of data transmission with space telescopes and satellite communications, but "large" is subjective here.  I don't imagine it can compare to what the communications giants are doing.

Realistically, any primitive version of this would rely on some time-and-position premium.  For instance, this sort of thing could be used as a data relay with computation done in between, which can have value far beyond the brute computation itself.  A popular example is financial trading bots.  People have proposed that a floating data center between NYC and London could exploit price differences that no one else has access to.

There's some discussion of this toward the 13-minute mark in this video:

http://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html

This would obviously apply to satellites as well, and I would expect some early versions to do things like this (maybe they already do this).  However, I doubt there's much money to be made off of such small differences, and the thermodynamic argument can't be significant without some serious scaling up.

Offline Spugpow

  • Member
  • Posts: 22
  • Liked: 2
  • Likes Given: 3
Perhaps another use for banks of computers in space is to host sensitive data/legally dubious websites like wikileaks.

Offline LegendCJS

  • Full Member
  • ****
  • Posts: 575
  • Boston, MA
  • Liked: 7
  • Likes Given: 2
The fundamental assumption of yours that it is easy to cool things off when surrounded by the best insulator people know how to make, i.e. vacuum, is seriously flawed.
Remember: if we want this whole space thing to work out we have to optimize for cost!

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
The fundamental assumption of yours that it is easy to cool things off when surrounded by the best insulator people know how to make, i.e. vacuum, is seriously flawed.

I do hope you're familiar with (sigma * T^4).  If you'd like, I can write out the Carnot efficiency formula, which is relevant for running a data center on Earth at the same temperature.  With this and other equations, you could in somewhat short order produce a calculator which can compare the cost of running computations on this satellite versus its terrestrial counterpart given your assumptions about the prices for everything.
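Here is the skeleton of such a calculator - a minimal sketch with everything economic left out, assuming an ideal radiator with a negligible background sink and an ideal (Carnot) refrigerator on Earth:

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_m2_per_kw(temp_k, emissivity=0.9):
    # ideal radiator area (m^2) to reject 1 kW at temp_k (kelvin)
    return 1000.0 / (emissivity * SIGMA * temp_k**4)

def carnot_work_per_watt(t_cold, t_ambient=300.0):
    # minimum work input per watt of heat lifted from t_cold on Earth
    return (t_ambient - t_cold) / t_cold

for T in (250, 150, 100):
    print(f"{T} K: {radiator_m2_per_kw(T):.0f} m^2/kW in space vs "
          f"{carnot_work_per_watt(T):.1f} W of work per W of heat on Earth")

Plug in your own prices per m^2 of radiator and per kWh of electricity, and the comparison falls out.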

However, Jim assures me that he's done all of this, and even included realistic figures for the maintenance.  I eagerly await the mathematics that lead to his final inequality that he shared with us.

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 32443
  • Cape Canaveral Spaceport
  • Liked: 11188
  • Likes Given: 331

However, Jim assures me that he's done all of this, and even included realistic figures for the maintenance.  I eagerly await the mathematics that lead to his final inequality that he shared with us.

Not needed.  It's blatantly obvious.  The little efficiency gained by the temperature advantage is grossly overshadowed by the logistics.  It doesn't take a rocket scientist to see it.

That is why you haven't heard anybody make the argument.

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 32443
  • Cape Canaveral Spaceport
  • Liked: 11188
  • Likes Given: 331
Regarding the revenue streams for private development of new space industries, I'm most familiar with these:

 - Imagery of Earth, communications, and GPS

The part of the list that is, and may remain, the only viable one

Offline D_Dom

  • Global Moderator
  • Full Member
  • *****
  • Posts: 497
  • Liked: 198
  • Likes Given: 112
I am glad to see you recognize the challenges are immense. Can you demonstrate a basic understanding of said challenges by providing data supporting your claims?
Avoid quoting wikipedia and show supporting evidence of
"energy can be cheap in space" or
"progress in single threaded processors has stalled" or
"passively cool a data center in space"

"I think it could only be solved by using resources transported from lunar or asteroid resources" is a reasonable statement, maybe we will see that capability exist in my lifetime, I certainly hope so.
Space is not merely a matter of life or death, it is considerably more important than that!

Offline IRobot

  • Full Member
  • ****
  • Posts: 1294
  • Portugal & Germany
  • Liked: 284
  • Likes Given: 255
Even more important - progress in single-threaded processors has stalled.  Going to lower temperatures will allow the speeds of single processors to continue to advance.  In research, this has already happened.  The most powerful single thread computations have been done at supercooled temperatures.
That's why real engineers invented multi thread programming and GPU computing, instead of sending computers to space.

Also, although core speed has not increased much in recent years, power consumption has been severely reduced.

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
Regarding the revenue streams for private development of new space industries, I'm most familiar with these:

 - Imagery of Earth, communications, and GPS

The part of the list that is, and may remain, the only viable one

I make no claim that the proposal would be more lucrative than the other items on the list.  The concept was only ever intended to be interesting to people who are already interested in those other points.


However, Jim assures me that he's done all of this, and even included realistic figures for the maintenance.  I eagerly await the mathematics that lead to his final inequality that he shared with us.

Not needed.  It's blatantly obvious.  The little efficiency gained by the temperature advantage is grossly overshadowed by the logistics.  It doesn't take a rocket scientist to see it.

That is why you haven't heard anybody make the argument.

Will you clarify what you mean by "efficiency" in this context?  Some possibilities are:

 - Thermodynamic efficiency
 - The energy required per computation
 - The energy required per computation at a given temperature
 - Economic efficiency

I'm getting somewhat tired of being the only one here making references to actual physical laws and units.

I am glad to see you recognize the challenges are immense. Can you demonstrate a basic understanding of said challenges by providing data supporting your claims?
Avoid quoting wikipedia and show supporting evidence of
"energy can be cheap in space" or
"progress in single threaded processors has stalled" or
"passively cool a data center in space"

Processor speed has leveled off.  That was my point in what I wrote, and I thought I provided elaboration on it, but I'm always happy to give more clarification.  The phenomenon of leveling off of processor speeds is well documented.

http://www.gotw.ca/images/CPU.png

By "passive", I mean that it is not cooled by a thermal (cryogenic) cycle.  This would be the case if you demanded to run something on Earth at very low temperatures.  The JWST, for instance, will be about 50 degrees Kelvin.  Those temperature are very commonly achieved in labs, obviously, since liquid Hydrogen is even lower temperature.  You just don't get it passively, you put energy into a thermal cycle to sustain that temperature.  Any heat production (which computation will cause) has to be removed by that heat cycle, and you're penalized by the Coefficient of Performance (COP) ratio.  The lower temperature you go to, the higher that ratio is.  That's the case for Earth.  In space, for passive heat removal, the obvious physical constraint is a balance between the heat production, the radiator area, and the temperature.

I'm not arguing about the cost of delivering energy in space.  Even when I said that in my post, I said it wasn't compelling.  Advocates of space-based solar power transmitted via microwave would obviously maintain the position that it is cost-efficient.  My proposal, on the other hand, doesn't even directly require it.  There are several multipliers that would allow such a data center to make more money per the amount of energy it uses, compared to its ground-based counterpart.

Even more important - progress in single-threaded processors has stalled.  Going to lower temperatures will allow the speeds of single processors to continue to advance.  In research, this has already happened.  The most powerful single thread computations have been done at supercooled temperatures.
That's why real engineers invented multi thread programming and GPU computing, instead of sending computers to space.

Also, although core speed has not increased much in recent years, power consumption has been severely reduced.

There are two concepts here: clock speed and energy consumption.  Lower energy consumption is trivially better.  Faster clock speed is also desirable in a way that you may not have appreciated.  In computer science, Amdahl's law quantifies the speedup you get from using multiple processors as opposed to one.  That speedup is less than the number of processors.

https://en.wikipedia.org/wiki/Amdahl%27s_law

It is a very strong theoretical claim in computer science that 1 processor doing 10*N operations is superior to 10 processors each doing N operations.  That means that you can solve more problems with the former than with the latter.
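A quick illustration (the 95% parallel fraction is an arbitrary example value):

def amdahl_speedup(parallel_fraction, n):
    # speedup over one processor when using n processors (Amdahl's law)
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

for n in (10, 100, 1000):
    print(f"{n:>4} processors: {amdahl_speedup(0.95, n):.1f}x speedup")

The parallel machine saturates at 20x here no matter how many processors you add, while a single processor clocked 10x faster gives the full 10x on any workload.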

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 32443
  • Cape Canaveral Spaceport
  • Liked: 11188
  • Likes Given: 331

I'm getting somewhat tired of being the only one here making references to actual physical laws and units.


Not my fault that you are only making references and not providing hard data supporting your claim, nor is it our fault that you are proposing a complex solution to a nonexistent problem.

It is very easy to see the non-viability.

If low temp computing is such a need or even desired, where are all the systems/installations for those who can afford them, like the military, NSA, national labs, etc.?  They can pay for the cryogenic cooling if it were desired.

We can ignore that for a moment. 

What are the:
Costs to design and build a space-based low temp computing platform
Costs to launch said platform
Costs and logistics to maintain said platform, both the spacecraft portion (propellant and hardware) and the payload portion (data storage and CPUs).  This will require servicing spacecraft, with their inherent launches
Costs of the comm infrastructure for said platform.  It would need a more robust system than NASA's TDRSS (more spacecraft and ground stations, larger spacecraft, etc)

Hmmmm, wait a minute.  Instead of doing all these launches, let's just take the propellants and pressurants and the other cryogens used for the launches and send them to a cryogenic computing center.  That is what is meant by efficiency.




Offline randomly

  • Full Member
  • ****
  • Posts: 518
  • Liked: 98
  • Likes Given: 40
The concept is flawed because although you can passively achieve very low temperatures in space (eg JWST) this is only at very low energy flows. The only way to practically dissipate heat in space is via radiation and this obeys the Stefan-Boltzmann law that power radiated is proportional to the fourth power of temperature.

You will need to dissipate a great deal of heat which forces your radiators to be massive if you are trying to do this passively.

If you go the active cooling approach it becomes vastly easier to actually do it on earth, it would also be vastly cheaper, especially from a maintenance and upgrade point of view.

Also if there was some economic advantage you would see cryocooled processors in use, at least in niche applications. But you do not.

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
So economic efficiency.  I'm not asking trick questions, and your last comment was productive:

It is very easy to see the non-viability.

If low temp computing is such a need or even desired, where are all the systems/installations for those who can afford them, like the military, NSA, national labs, etc.?  They can pay for the cryogenic cooling if it were desired.

The proposal is to provide a commodity.  If any commodity can be delivered to market at the running price, then we should declare it to have solved a problem.  I presume that anyone who buys platinum (for instance) on the market needed a metal for something they were doing.  A decade from now, I imagine that cloud computing will be fully a commodity (but you're free to disagree with that assumption as well).  This is the central claim:

"If you could passively cool a data center in space to an extremely low temperature then you could get a combination of high speeds while at the same time low cost per FLOP."

I agree that we should consider this in a specific and comparative sense.  Actually, we're fairly close to converging on the criteria that must be satisfied for the concept to be viable.  All of the costs that are unique to operation in space must, at minimum, be lower than the energy costs of the cryo-cycle that runs a counterpart data center on Earth.

The case for this would be strengthened if, responding to market demand, we started to see many cryogenic data centers built on Earth.  That development would be an obvious indicator that orbiting data centers may be approaching profitability.  Currently, this is not the case.  That could change.  This proposal comes with connected predictions and equations that could be used to evaluate some comparative economics.  In other words, the best kind of proposal - a falsifiable one.
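In sketch form, the criterion is a one-line inequality.  Every input below is a placeholder assumption, and with these particular numbers it fails - consistent with what I just said about the current state:

def earth_cryo_cost_per_year(compute_kw, t_cold, t_hot=300.0,
                             usd_per_kwh=0.10, hours=8760):
    # annual electricity bill for Carnot-ideal cooling that lifts the
    # compute heat from t_cold (kelvin) to ambient; real systems are worse
    return compute_kw * (t_hot - t_cold) / t_cold * usd_per_kwh * hours

space_premium_per_year = 5.0e6  # placeholder: amortized launch/ops, $/yr
earth_bill = earth_cryo_cost_per_year(1000.0, 100.0)  # ~$1.75M/yr
print(space_premium_per_year < earth_bill)  # False with these inputs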

The concept is flawed because although you can passively achieve very low temperatures in space (eg JWST) this is only at very low energy flows. The only way to practically dissipate heat in space is via radiation and this obeys the Stefan-Boltzmann law that power radiated is proportional to the fourth power of temperature.

You will need to dissipate a great deal of heat which forces your radiators to be massive if you are trying to do this passively.

But you've neglected Landauer's principle.  The energy you need to remove per computation decreases as you decrease temperature.  You're using thermal engineering, but you also have to use the thermodynamics of computation.  You have to combine BOTH.  There is no other way for the concept to make sense.  This is what you must add:

E = k T ln 2

This is the energy per computation.  The colder you get, the less energy you have to dissipate through the radiator.
« Last Edit: 12/16/2013 08:33 PM by AlanSE »

Offline IRobot

  • Full Member
  • ****
  • Posts: 1294
  • Portugal & Germany
  • Liked: 284
  • Likes Given: 255
You are also forgetting that servers require a lot of physical maintenance (server farms, that is, not a single machine) and that a LEO environment will produce frequent processing errors, not to mention material degradation.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 28559
  • Minnesota
  • Liked: 8495
  • Likes Given: 5539
I thought of a similar idea, as a way to do something useful with space-based solar power without having to beam the power.  But the temperature problem is actually harder in space, since radiating heat away is arguably a harder problem than on Earth, where you have an atmosphere (or bodies of water) to easily dump heat to.

My idea was to perform latency-tolerant computations using the plentiful solar energy. The biggest problem here is if Moore's Law continues... by the time you've built your spacecraft and launched it and started operating it, a process that takes years, state of the art terrestrial processing power will have become significantly cheaper, meaning your advantage in theoretically lower cost power is lost. Also, it'd be hard to make a big, cheap radiator to dump heat that's roughly at room temperature.

However, if Moore's Law slows down (and especially if heat tolerant chips are cheap) and power becomes the most expensive input to computation by an order of magnitude, it might become worth it... You could put your celestial data center even closer to the Sun to collect solar power even cheaper. But the cost of building the radiator wouldn't improve by getting closer to the Sun (it'd get a bit worse, in fact).

However, it might be possible to run a computer that /requires/ cryogenic temperatures, like some sort of quantum computer or something operating with superconductors. It might be that such computers would still have a lot of waste heat, but rejecting waste heat is REALLY expensive at cryogenic temperatures...

However, there is one place that we've explored a bit that is cryogenic (~90 Kelvin, significantly lower than the critical temperature of some superconductors we've already developed) but actually has better heat rejection characteristics than the Earth's atmosphere... That place is Titan. Hopefully you can tolerate latencies measured in hours! :D
(But for some supercomputer simulations, that shouldn't be a problem.)
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 28559
  • Minnesota
  • Liked: 8495
  • Likes Given: 5539
...
But you've neglected Landauer's principle.  The energy you need to remove per computation decreases as you decrease temperature.  You're using thermal engineering, but you also have to use the thermodynamics of computation.  You have to combine BOTH.  There is no other way for the concept to make sense.  This is what you must add:

E = k T ln 2

This is the energy per computation.  The colder you get, the less energy you have to dissipate through the radiator.
This is true; however, heat dissipated via radiation is proportional to the fourth power of temperature... which definitely beats the simple single power of temperature in your equation at some point.  Radiator structure is going to be around the same order of magnitude as your solar array if you're trying to reject heat at low temperatures (as usual, the optimum will be somewhere in the middle...).
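To make that concrete - an idealized sketch at the Landauer limit (which no real hardware approaches):

import math

SIGMA, K_B = 5.67e-8, 1.380649e-23

def landauer_ops_per_m2(temp_k):
    # operations per second that 1 m^2 of ideal radiator at temp_k can
    # support, if each op dissipates the Landauer minimum k*T*ln(2)
    return SIGMA * temp_k**4 / (K_B * temp_k * math.log(2))

# net T^3 scaling: halving T halves the energy per op, but cuts the
# radiated power per m^2 by 16x, so throughput per m^2 drops 8x
for T in (300, 150, 75):
    print(f"{T:>3} K: {landauer_ops_per_m2(T):.2e} ops/s per m^2")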


...on the other hand, if your computations are reversible, you don't actually need to reject any heat... ;)
« Last Edit: 12/16/2013 09:05 PM by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 28559
  • Minnesota
  • Liked: 8495
  • Likes Given: 5539
You are also forgetting that servers require a lot of physical maintenance (server farms, that is, not a single machine) and that a LEO environment will produce frequent processing errors, not to mention material degradation.
We can hand-wave that away. :) The amount of physical maintenance required is a design variable.  Engineering choices determine how much maintenance is required.  You can build a system that can operate for years (or even decades) with zero physical maintenance, and I've seen such systems marketed even for terrestrial data servers.  You just need the right systems engineering and enough spares (you can operate entire servers as spares, too... this is partly how Google works).

As far as processing errors and material degradation, well that's also quantifiable and something you can engineer. For processing errors: Parity checks, redundancy, watch-dog timers, inherent radiation resistance, and shielding are all possible ways to address the issue (and this can be an issue even on Earth... nobody sane runs a server without ECC these days, SSDs and HDs already include internal consistency checks, RAIDs are common place and a RAID-like architecture is used even inside an SSD, etc). Material degradation is just a typical satellite engineering constraint, no different from what current commsat providers need to consider.


But again, none of this is terribly relevant until Moore's Law slows way down.
« Last Edit: 12/16/2013 09:14 PM by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
You are also forgetting that servers require a lot of physical maintenance (server farms, that is, not a single machine) and that a LEO environment will produce frequent processing errors, not to mention material degradation.

I have indeed neglected the maintenance, because I have no relevant industry experience in this, and cannot comment on it.

When you refer to the LEO environment, do you mean the radiation or something else?  I don't believe it would ever make sense without significant shielding.  Without a space industry, that would be an unreasonable mass requirement for lifting to orbit unless the computers could be made extremely small.  I guess I can't dismiss that possibility, but even if their size were 0 - a literal point - with spherical shielding around it, you couldn't match the radiation environment on Earth.  For 2 meters of shielding around a point-computer, that's already a 21 ton launch!
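For scale, a toy shell-mass calculation; the density is whatever shielding material you assume (my 21 ton figure corresponds to a material somewhat lighter than water):

import math

def shell_mass_tonnes(thickness_m, density_kg_m3, inner_radius_m=0.0):
    # mass of a spherical shell shield around a (point-sized) computer
    outer = inner_radius_m + thickness_m
    vol = 4.0 / 3.0 * math.pi * (outer**3 - inner_radius_m**3)
    return vol * density_kg_m3 / 1000.0

for rho in (500, 1000, 2000):  # assumed shield densities, kg/m^3
    print(f"{rho:>4} kg/m^3: {shell_mass_tonnes(2.0, rho):.0f} t")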

Perhaps there are other concerns for operating in a vacuum.

However, I wonder if people have fully appreciated that the computational limit is truly fundamental.  If optical computing became possible, it would still work better at low temperatures.

However, if Moore's Law slows down (and especially if heat tolerant chips are cheap) and power becomes the most expensive input to computation by an order of magnitude, it might become worth it... You could put your celestial data center even closer to the Sun to collect solar power even cheaper. But the cost of building the radiator wouldn't improve by getting closer to the Sun (it'd get a bit worse, in fact).

However, it might be possible to run a computer that /requires/ cryogenic temperatures, like some sort of quantum computer or something operating with superconductors. It might be that such computers would still have a lot of waste heat, but rejecting waste heat is REALLY expensive at cryogenic temperatures...

Two things here:

Moving closer to the sun will likely hurt, not help, the basics of this proposal.  But if the radiator extends out into the umbra, then it becomes less clear.  Also, you can't "trick" nature by adding a thermal cycle.  Landauer's principle exists precisely because you could otherwise use a thermal cycle to lower the temperature of a computer: if computation were equally efficient at all temperatures, you could build a perpetual motion device.

Quantum computing already requires super cold temperatures.  Those temperatures are far below the temperature of the CMB, so even in space it would require a thermal cycle.  That's why I did not propose it.
« Last Edit: 12/16/2013 09:23 PM by AlanSE »

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 28559
  • Minnesota
  • Liked: 8495
  • Likes Given: 5539
Moving closer to the sun improves your power production via inverse square law, but it makes shielding your radiator more difficult. However, conventional computer chips operate in a fashion that can't be reduced immediately to the fundamental limit of Landauer's principle... (which governs an ideal device) Their operating power /isn't/ simply proportional to temperature, so operating them at a higher temperature may make sense until the error rate is too high.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Asteroza

  • Full Member
  • ****
  • Posts: 662
  • Liked: 88
  • Likes Given: 2
For comparison with cryogenic cooling for terrestrial datacenters, remember that liquid nitrogen at retail is around the same cost as milk by volume, and for a datacenter that recycles its coolant, there are only marginal cooling costs associated with operating a large cryocooler.  To compete with that requires either a substantial advantage of some sort, or some physical limitations that make on-orbit operations attractive.

Previously, image processing on-orbit may have been a market due to RF bandwidth limitations, especially on downlinks, as image resolution and quantity go up, but with the recent LADEE demo of laser communication approaching 1 Gbps, the incentive for on-orbit processing will effectively disappear for the short term.

High security via a citadel approach, such as basing both a hardware security module and a broadcasting high entropy source in orbit to prevent direct attacks, may be a marketable service, but that is trivial hardware in comparison to a full blown datacenter.

There has been talk of bitcoin mining rigs in cubesats/smallsats that can talk to each other (emulating the Iridium constellation), in an effort to protect against node majority attacks effectively taking over the bitcoin network and distorting the blockchain (a legitimate concern, as there are allegations that a large state-sponsored group is already trying to gain a majority node situation).  It would also put sections of the bitcoin network and associated bitcoin exchanges out of the direct reach of conventional law enforcement for countries that oppose bitcoin, provided launch providers are neutral or pro-bitcoin - which would probably limit you to Ukrainian launches...

Long term cold storage of data would suggest a lunar far side subsurface installation due to the shielding necessary.

So really, some kind of supranational network infrastructure that can't be easily subverted, or some derivative thereof, seems to be the ideal market. A TOR satcom service perhaps?


Offline randomly

  • Full Member
  • ****
  • Posts: 518
  • Liked: 98
  • Likes Given: 40
The concept is flawed because although you can passively achieve very low temperatures in space (eg JWST) this is only at very low energy flows. The only way to practically dissipate heat in space is via radiation and this obeys the Stefan-Boltzmann law that power radiated is proportional to the fourth power of temperature.

You will need to dissipate a great deal of heat which forces your radiators to be massive if you are trying to do this passively.

But you've neglected Landauer's principle.  The energy you need to remove per computation decreases as you decrease temperature.  You're using thermal engineering, but you also have to use the thermodynamics of computation.  You have to combine BOTH.  There is no other way for the concept to make sense.  This is what you must add:

E = k T ln 2

This is the energy per computation.  The colder you get, the less energy you have to dissipate through the radiator.
Again you are failing a basic reality check.  Landauer's principle is irrelevant.  Current computing technology operates at millions of times more energy per bit than Landauer's theoretical limit.

Overclocking processors by cooling them has nothing to do with Landauer's principle.  Cooling modern CMOS processors reduces the interconnect resistance and the channel resistance of the FETs (the dominant effect), which allows higher current flows to drive the capacitance of each node, reducing rise and fall times.  It also reduces leakage currents, which among other things helps switching thresholds.  There are limits though: get the junction too cold and the carriers start to 'freeze' out, and the channel resistance starts going up again, which makes things worse.
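As a back-of-envelope illustration of that dominant effect - a crude linear model that only holds near room temperature (well below ~100 K, residual resistivity and freeze-out take over):

ALPHA_CU = 0.0039  # temperature coefficient of copper resistivity, 1/K

def relative_rc_delay(temp_k, t_ref=293.0):
    # interconnect RC delay relative to room temperature, treating wire
    # resistance as scaling linearly with T like bulk copper resistivity
    return 1.0 + ALPHA_CU * (temp_k - t_ref)

for T in (293, 250, 200, 150):
    print(f"{T} K: ~{relative_rc_delay(T):.2f}x room-temperature delay")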

Competitive overclockers get a great deal of their speedup by increasing supply voltages.  However, that's not really a viable technique for data centers because it drastically reduces the reliability and lifetime of the ICs.  As current densities increase you hit electromigration limits, and the electrons literally start to tear the physical conductors apart.

Real world systems at most use chilled water cooling.

You sound like me in my first years of college. You need some real world tempering.

"In Theory, Theory and Practice are the same. In Practice they're not."

Offline MP99

Even more important - progress in single-threaded processors has stalled.  Going to lower temperatures will allow the speeds of single processors to continue to advance.  In research, this has already happened.  The most powerful single thread computations have been done at supercooled temperatures.
That's why real engineers invented multi thread programming and GPU computing, instead of sending computers to space.

Also, although core speed has not increased much in recent years, power consumption has been severely reduced.

And the processors get more sophisticated, and the MIPS-to-GHz ratio improves.

cheers, Martin

Offline MP99

The concept is flawed because although you can passively achieve very low temperatures in space (eg JWST) this is only at very low energy flows. The only way to practically dissipate heat in space is via radiation and this obeys the Stefan-Boltzmann law that power radiated is proportional to the fourth power of temperature.

You will need to dissipate a great deal of heat which forces your radiators to be massive if you are trying to do this passively.

If you go the active cooling approach it becomes vastly easier to actually do it on earth, it would also be vastly cheaper, especially from a maintenance and upgrade point of view.

Also if there was some economic advantage you would see cryocooled processors in use, at least in niche applications. But you do not.

Datacentres consume huge amounts of energy and put it all out as heat.  ISTM that datacentre cooling might be eased if they switched to simple watercooling, but even that seems to be a step too far for the industry.

Simple water cooling will get you a fair way into the overclocking regime, and ISTR that Peltier and other active coolers can easily go beyond that.

But I see zero chance that businesses will put their critical infrastructure onto overclocked computers. Businesses want reliability, and that's what the clocking spec gives you.

cheers, Martin

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
For many data centers, about half the power consumption is for cooling.  The trend that would push for greater viability of this concept would be that fraction rising in the future.  For now, the immediate point is only that the concept might be of interest to people working on the advancement of processors as well as to those promoting space industries.

I encourage reading the link I included in the OP.  It is where I'm coming from regarding the temperature and performance parts of this concept.  According to that source, the following things happen with decreased temperature:

http://www.extremetemperatureelectronics.com/tutorial1.html

 - field-effect type transistors exhibit increased gain and lower leakage
 - parasitic resistance and capacitance decrease
 - heat transfer improves
 - many devices exhibit lower noise

Their part 3 contains more information about the "freeze-out" of carriers referenced by another user:

http://www.extremetemperatureelectronics.com/tutorial3.html

Corresponding points by randomly:

 - cooling reduces interconnect resistance and channel resistance of the FET
 - reduces leakage currents, helping switching thresholds
 - if the junction gets too cold, the carriers 'freeze' out

As articulated by my reference, the temperature at which freeze-out occurs is comfortably below what you could ever hope to passively cool to in space with an over-sized radiator.  JWST is about 40 kelvin, and there are major issues with going below this, particularly in LEO (meaning it is nigh impossible).  The operating point would have to be much higher than this, although still much lower than what you'd get putting the data center anywhere on Earth - that's the entire point.  The above points have relevance, and a great amount of intersection with the reference, although the relevance of increasing voltage gives me pause.

Saying that the writing here is on the level of a college freshman shows that you are clearly not actively teaching at the university level.  My advice is to keep in mind who your audience is.  This thread is currently on the 2nd page of Google results for "orbiting data center".  Whether that was your intention or not, you are contributing to the same information commons as Wikipedia does.  I do my best to write quality material for the online encyclopedia, so let that reflect on your assessment of its reliability as you will.

Speaking of related resources, here is an instance of a similar proposal:

http://server-sky.com/
http://spacejournal.ohio.edu/issue16/lofstrom.html

They've opted for ultra-thin and ultra-light computers, which is quite different from what I had in mind.  I expect that radiation could be a show stopper.  That's just one of the questions that comes up with their proposal.  I don't mean to endorse that, but the paper about it is relatively information-rich.  I have yet to parse their central economic motivation, although they start out with similar rhetoric about energy used for computation.

Following up on the talk about radiator area, I wrote out some of the specifics.  If you credit the temperature dependence of energy per computation, then the required radiator area follows an inverse-cube (1/T^3) relationship.  Most thermal arguments in this thread could be put into these terms.

http://mathbin.net/521925

Neglecting other factors, the most economic temperature would then be dictated by the relative price per area of solar panels (if this is the energy source) compared to the price per area of the radiator.  Since the function that a radiator fills is the less sophisticated of the two, a radically cheaper radiator is at least plausible, making your best figure-of-merit occur at temperatures significantly lower than average Earth surface temperatures.
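Under those assumptions (Landauer-limited energy per op, fixed computation rate, all other costs neglected), the optimum even has a closed form.  A sketch with made-up price ratios:

SIGMA = 5.67e-8  # W/m^2/K^4

def cost_optimal_temp(solar_flux_w_m2, solar_to_radiator_price_ratio):
    # minimize c_s*A_solar + c_r*A_radiator, where A_solar ~ T (power in)
    # and A_radiator ~ 1/T^3, giving T_opt = (3*s/(sigma*ratio))**0.25
    return (3.0 * solar_flux_w_m2
            / (SIGMA * solar_to_radiator_price_ratio)) ** 0.25

for ratio in (10.0, 100.0):
    print(f"solar {ratio:.0f}x pricier per m^2: "
          f"T_opt = {cost_optimal_temp(1000.0, ratio):.0f} K")

With a radiator 10x cheaper per unit area than the solar array, the optimum lands around 270 K; at 100x cheaper, around 150 K.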

Thanks for all of the comments here.
« Last Edit: 12/17/2013 03:36 PM by AlanSE »

Offline Nilof

  • Full Member
  • ****
  • Posts: 914
  • Liked: 376
  • Likes Given: 540
So why would you want to send computers to space rather than to Antarctica?

Compared to LEO, Antarctica is colder, has heat sinks other than radiation, and can be linked to the rest of Earth by optical fiber for higher bandwidth and lower average latency.

Radiating energy to cool yourself below 200K in LEO is an engineering challenge.  The radiated energy per surface area will be minimal, and the radiator has to be carefully shielded from both the hot Earth below it and the sun.  IR telescopes that need cooling have been sent into solar orbits or Earth-sun L2 for a reason.  Meanwhile, in Antarctica you basically get an infinite ~200K heat sink for free.
For a variable Isp spacecraft running at constant power and constant acceleration, the mass ratio is linear in delta-v.   Δv = ve0(MR-1). Or equivalently: Δv = vef PMF. Also, this is energy-optimal for a fixed delta-v and mass ratio.

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
Radiating energy to cool yourself below 200K in LEO is an engineering challenge.  The radiated energy per surface area will be minimal, and the radiator has to be carefully shielded from both the hot Earth below it and the sun.  IR telescopes that need cooling have been sent into solar orbits or Earth-sun L2 for a reason.  Meanwhile, in Antarctica you basically get an infinite ~200K heat sink for free.

Yes, that is the calculation that I just posted, except that I did it for 100 K.

That does neglect the shielding from the Earth and sun.  I've tried to carefully qualify my statements with that detail.  Viable designs for the shielding are quite difficult to think of, even theoretically.  At low orbits, the angle that the Earth takes up is huge.  For the sun, you'd prefer a stationary shield, which could be a strange ring shape (requiring seasonal corrections, depending on the orbital inclination).  But then sunlight striking the Earth shield can reflect toward the radiator.
« Last Edit: 12/17/2013 03:49 PM by AlanSE »

Offline savuporo

  • Senior Member
  • *****
  • Posts: 5155
  • Liked: 985
  • Likes Given: 343
The entire concept is a nonstarter - for computing.  Latency and bandwidth alone kill it, and cooling and "free solar" energy are very small net benefits.

There MIGHT be a market for storage.  Finding a physical location for a safe backup of some types of critical data is actually not always that easy - but LEO is probably not a good location either.  And once you are outside of LEO, COTS computing equipment won't work, so the cost structure simply won't close.

If you find a way to economically put a very large data store in LEO, on an orbit that doesn't decay, that the first small debris impact is not going to take out, with a reasonable communication window for backup and retrieval operations, and with enough data redundancy that it can work on COTS equipment, etc., there might be a very small niche market for it.
Hard to close that business case.

« Last Edit: 12/17/2013 05:05 PM by savuporo »
Orion - the first and only manned not-too-deep-space craft

Offline muomega0

  • Full Member
  • ****
  • Posts: 861
  • Liked: 65
  • Likes Given: 1
Regarding the revenue streams for private development of new space industries I want to know what people think of this proposal:
Build low-temperature data centers in space to do cloud computing

Why?  In short, because low-temperature processors have better performance.  By that I mean, they can have faster clock cycles.  Another motivation is that energy can be cheap in space, but this isn't very compelling.  On the other hand, it's possible to expel energy by radiators at a very low temperature, depending on the heat input and size of the radiator of course.  The physical underpinning of this idea is extremely well-established.  We know that processors can go faster at low temperatures, but the fundamental law is that any computational operation requires a minimum energy due to the laws of thermodynamics. 

 The most powerful single thread computations have been done at supercooled temperatures.

If you imagine commodity computing, then there will be a price per calculation, and this price will be higher if the computation is performed faster.  Even if a supercomputer in space can never compete with the cost-per-computation on Earth, it doesn't matter.  It only needs to compete with a supercomputer on Earth that runs at the same speed - and that's a race that space data centers might be able to win 10 years from now.  Additionally, computer science has extremely robust arguments that establish why not all problems can be made parallel efficiently.  So we should expect the demand to remain strong.

It is cheaper to launch to LEO than to GEO (http://en.wikipedia.org/wiki/Communications_satellite),
but what other advantages would a LEO data center provide (time lag comparison, etc) for which someone would pay a premium over another service?  As Savuporo suggests, could video/data be uploaded and distributed faster from LEO rather than via the current approach, regardless of the overclocking?  Raising the altitude above the ISS (say, to 550 nm) will significantly reduce reboost, with minimal IMLEO mass penalty for the smaller LV (not HLV).

Would you not need two or three LEO data centers due to the ~90 min orbit, or would it communicate through GEO?

It would help if you would break down the heat load by source, since not all of the components need to be at cryogenic temperatures: CPU, data storage, power conversion, etc.


The radiator design would be a nightmare, particularly if in LEO, and that is the most obvious place for it considering the time lag.  The JWST is in a spot where low temperatures are easier to manage, but it's not as good for providing data services.  In LEO, you would need both a sun shield and an Earth shield.  You could need completely new, radical radiator designs.  Radiation would be an economics deal-breaker, so it would have to be shielded.

The concept is flawed because although you can passively achieve very low temperatures in space (eg JWST) this is only at very low energy flows. The only way to practically dissipate heat in space is via radiation and this obeys the Stefan-Boltzmann law that power radiated is proportional to the fourth power of temperature.

You will need to dissipate a great deal of heat which forces your radiators to be massive if you are trying to do this passively.

If you go the active cooling approach it becomes vastly easier to actually do it on earth, it would also be vastly cheaper, especially from a maintenance and upgrade point of view.
To try and put some numbers on what randomly stated, one would have to trade radiator size against power, and minimize the amount of heat that must be removed at the colder temperatures (if needed).

To shrink the radiator size, operate at a higher temperature. 
Q = sigma * e * A * (T^4 - Tsink^4), with Tsink = 0 for simplicity.  The radiator area (m^2) vs. heat rejected:

Reject temp    1 kW    10 kW    75 kW    100 kW    1 MW
10 C           4       36       269      359       3587
130 C          1       9        65       86        861
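For anyone who wants to rerun these numbers, a minimal sketch (the emissivity of ~0.77 is my back-solved assumption from the table above, not a stated value):

SIGMA = 5.67e-8  # W/m^2/K^4

def radiator_area_m2(q_watts, temp_c, emissivity=0.77):
    # area needed to reject q_watts at temp_c, with the sink at 0 K
    T = temp_c + 273.15
    return q_watts / (emissivity * SIGMA * T**4)

print(radiator_area_m2(1.0e6, 10.0))   # ~3560 m^2 (table: 3587)
print(radiator_area_m2(1.0e6, 130.0))  # ~870 m^2 (table: 861)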

As a comparison, the ISS power module radiators reject about 6 kW with a peak of 14 kW (eclipse), measure ~3 m x 14 m, and weigh 740 kg.

To cool to lower temperatures and reject at a higher temperature, a refrigeration system is needed, which takes power.

Carnot tells us that with a sink temperature of 72F, a refrigeration system needs about 6.7 W of input power per watt of heat lifted; at -80F, the approximate effective sink temperature in LEO, about half that input, so 3.3 W/Wt.  At 260F, 10 W/Wt.  Perhaps a two-phase bus would offer even better mass and power advantages.

Perhaps you can consider adding a data center/overclocked supercomputer as another HSF leg to the best architecture, one that features an L2 Gateway and a LEO zero-boiloff LH2 depot, includes HSF satellite assembly and servicing at L2, and is not a one-legged stool

HSF would be a great asset to service a LEO data center and the depot, but I have not seen the economics yet.  Always looking out for a new low cost mission  ;)
« Last Edit: 12/18/2013 11:38 AM by muomega0 »

Offline grondilu

  • Full Member
  • ****
  • Posts: 563
  • France
  • Liked: 15
  • Likes Given: 5
If cooling processors could be done with radiation only, we'd put them in a large room with black walls.
Space is pretty much literally an astronomically-high hanging fruit.

Offline AlanSE

  • Full Member
  • *
  • Posts: 150
  • N Cp ln(T)
    • Gravity Balloon Space Habitats Inside Asteroids
  • Liked: 44
  • Likes Given: 31
If cooling processors could be done with radiation only, we'd put them in a large room with black walls.

You can't passively lower your temperature below that of your environment.  The proposal here is to put the data center in an environment that has a lower temperature than Earth's surface.  A radiator on Earth can't get lower temperatures because its surroundings radiate back to it.

You could make a black sheet and point it at the night sky, but there's still air.  The air couples the sheet back to ambient temperature by convection, which is much more effective than thermal radiation.  Even if you could insulate against that, the air also produces thermal radiation of its own.  The atmosphere has only a few wavelength "windows" to space, and most of the thermal band is blocked - this is why we have to observe IR from space.  Because of this, there's not really any way to access temperatures much lower than ambient on Earth without an active thermal cycle.  Furthermore, actively cooling processors on Earth is fundamentally prohibited from increasing the total energy efficiency of computation (including the thermal cycle), because that would violate the 2nd law of thermodynamics.  There still might be reasons to do it, but reducing energy use (by rule) cannot be one of them.

Even in space, the equilibrium blackbody temperature of something in LEO influenced only by the Earth is something like -30 degrees C.  That's why even a "bare" radiator couldn't do what I've described.
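A toy version of that estimate - flat plate facing Earth, Earth as a 255 K blackbody, with albedo-reflected and direct sunlight neglected:

T_EARTH_EFF = 255.0  # Earth's effective emission temperature, K

def eclipse_eq_temp_c(altitude_km):
    # equilibrium temperature of a plate heated only by Earth's emission
    view_factor = (6371.0 / (6371.0 + altitude_km)) ** 2
    return T_EARTH_EFF * view_factor ** 0.25 - 273.15

print(eclipse_eq_temp_c(400.0))  # about -26 C at ISS altitude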
« Last Edit: 12/19/2013 03:22 PM by AlanSE »

Offline grondilu

  • Full Member
  • ****
  • Posts: 563
  • France
  • Liked: 15
  • Likes Given: 5
You can't passively lower your temperature below that of your environment.  The proposal here is to put the data center in an environment that has a lower temperature than Earth's surface.  A radiator on Earth can't get lower temperatures because its surroundings radiate back to it

Sorry, I didn't get your point exactly.  I've just reread the thread more carefully, and I understand now that you want the chips to operate at low temperature because then they will consume less energy, which may be easier to radiate away.

But then, as randomly pointed out, the fourth power in the Stefan-Boltzmann law kills you.
« Last Edit: 12/19/2013 05:53 PM by grondilu »
Space is pretty much literally an astronomically-high hanging fruit.

Offline cordwainer

  • Full Member
  • ****
  • Posts: 563
  • Liked: 19
  • Likes Given: 7
The cost of getting all the requisite hardware into space for such a data center is higher than that of cryogenically cooling the same data center here on Earth.  Plenty of GIGs and supercomputers pay for cryogenic cooling Earthside.  I don't see the benefit of putting a data center in space unless you are doing it for other reasons, like security or pre-processing of information for dedicated cellular links to individual users (as in the case of portable satellite radio, TV, and internet broadcast stations).
