Author Topic: Design a mission to Proxima b  (Read 20939 times)

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 26443
  • Minnesota
  • Liked: 6356
  • Likes Given: 4632
Re: Design a mission to Proxima b
« Reply #160 on: 02/04/2017 02:01 AM »
Meh, this is interstellar travel we're talking about. Isotopic separation of each atom of the craft is easier than the rest of the task.
Chris
Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline rdheld

  • Full Member
  • *
  • Posts: 142
  • Liked: 3
  • Likes Given: 5
Re: Design a mission to Proxima b
« Reply #161 on: 02/09/2017 11:16 AM »
estimate for planetary properties

Online Stormbringer

  • Full Member
  • ****
  • Posts: 1278
  • Liked: 220
  • Likes Given: 80
When antigravity is outlawed only outlaws will have antigravity.

Offline Propylox

  • Full Member
  • *
  • Posts: 100
  • Colorado
  • Liked: 8
  • Likes Given: 6
Re: Design a mission to Proxima b
« Reply #163 on: 07/20/2017 08:17 AM »
------------------   Zombie Thread --------------------
Why stop at Proxima b? Consider the design requirements for any such mission;
1- A giant curtain array and a large optical telescope for observation
2- Advanced star tracking that can orient itself in interstellar space
3- Nuclear power and a layered approach to propulsion
4- The ability to identify and avoid objects, and to estimate and navigate multiple gravitational bodies. Due to the distance, any final course changes Earth sends will be based on information the probe gathered years earlier, so the spacecraft will have to modify or ignore those instructions and chart its own course from updated and enhanced data. It may bend its trajectory around smaller planets and the star, but should steer clear of large planets, their systems, and the gravitational consequences avoidance maneuvers would have on the trajectory.

With such capabilities needed just to get to Proxima b, the capability exists to fly through the system, around the star and on to another. Here are three missions, each about fifteen lightyears long. Double-check my trig ::)

A) to Alpha Centauri = 4.365 ly, to Luhman 16 = 3.673 ly, to WISE 0855-0714 = 6.005 ly; Total = 14.043 ly
Solar masses of 1.10, 0.907 and a 0.123 Red Dwarf + 0.045 and 0.040 Brown Dwarfs + a 0.0059 sub-BD

B) to Sirius = 8.583 ly, to Procyon = 5.200 ly, to Luyten's Star = 1.120 ly; Total = 14.903 ly
Solar masses of 2.02 and a 0.978 White Dwarf + 1.50 and a 0.602 White Dwarf + a 0.26 Red Dwarf

C) to Epsilon Eridani = 10.522 ly, to Tau Ceti = 5.450 ly; Total = 15.972 ly
Solar masses of 0.82 + 0.783, both with habitable planetary systems
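Since the post invites a double-check, here's a quick arithmetic pass over the quoted leg distances (a sketch; it only sums the figures given above):

```python
# Sum each candidate route's leg distances (lightyears) as quoted in the post.
routes = {
    "A": [4.365, 3.673, 6.005],   # Alpha Centauri, Luhman 16, WISE 0855-0714
    "B": [8.583, 5.200, 1.120],   # Sirius, Procyon, Luyten's Star
    "C": [10.522, 5.450],         # Epsilon Eridani, Tau Ceti
}
totals = {name: round(sum(legs), 3) for name, legs in routes.items()}
for name, total in totals.items():
    print(f"Route {name}: {total} ly")
```

Routes B and C check out; route A's legs sum to 14.043 ly.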
« Last Edit: 07/20/2017 08:20 AM by Propylox »

Offline Paul451

  • Full Member
  • ****
  • Posts: 1166
  • Australia
  • Liked: 570
  • Likes Given: 494
Re: Design a mission to Proxima b
« Reply #164 on: 07/20/2017 09:08 PM »
but should steer clear of large planets, their systems and the gravitational consequences avoidance maneuvers would have on trajectory.

Given the times of such trips, even at modest fractions of 'c', there's sufficient time for identification of major planets to be sent to Earth and analysis of their orbits sent back before the probe enters the system proper. (Even more so if the probe is decelerating into the system, rather than just flying by.) That allows a continuously improving navigational map to be developed on Earth and uploaded to the probe (written in a way that aids the probe in making autonomous decisions about guidance). The map might include estimates of things like debris density, based on human-devised theories and super-computer simulations. The map would also have a constantly refined list of priorities, the realisation of which feeds back into the accuracy of theory and simulations, which constantly improves the accuracy and utility of the "map".

That allows the probe to be vastly dumber than would be required for true "fire-and-forget" autonomy.

(It's like giving a rover a map of the main features of a landscape, including a representation of the gross terrain difficulty designed to simplify on-board processing, plus a specific target and a first-estimate optimal path to get there. Then let the rover handle the close-up navigation around rocks and dips along that path.)
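The timing claim above can be sanity-checked with a toy calculation. Assumptions are mine, not from the thread: a straight-line cruise at constant speed v (as a fraction of c), an Earth-Proxima distance of about 4.246 ly, probe motion during the reply's transit ignored, and the "last useful exchange" taken as the point x where the round-trip light time 2x equals the remaining flight time (D - x)/v.

```python
# Toy model of the last point at which a probe-to-Earth-and-back exchange
# still arrives before the probe does. Units: years and lightyears (c = 1).
def last_exchange_point(D, v):
    """Distance from Earth (ly) where a round trip just beats arrival.

    Solves 2*x = (D - x)/v  ->  x = D / (1 + 2*v).
    """
    return D / (1 + 2 * v)

D = 4.246  # Earth -> Proxima Centauri, ly (approximate)
for v in (0.10, 0.01):
    x = last_exchange_point(D, v)
    print(f"v = {v:.2f} c: last exchange {D - x:.2f} ly out, "
          f"~{(D - x) / v:.1f} yr before arrival")
```

Even at 10% of c there are roughly seven years of flight left after the last full exchange; at 1% of c the light-time delay is trivial compared to the trip, which is the point being made.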

Offline Propylox

  • Full Member
  • *
  • Posts: 100
  • Colorado
  • Liked: 8
  • Likes Given: 6
Re: Design a mission to Proxima b
« Reply #165 on: 07/21/2017 02:12 AM »
Given the times of such trips, even at modest fractions of 'c', there's sufficient time for identification of major planets to be sent to Earth and analysis of their orbits sent back before the probe enters the system proper. ... That allows the probe to be vastly dumber than would be required for true "fire-and-forget" autonomy.
I believe you've greatly underestimated the minimum requirements for such a craft and placed far too much reliance on desk jockeys. For example;

1) The primary trajectory is to a star, around it and to another. The craft must make final gravitational calculations of the star and plot a trajectory. There won't be time to receive this from Earth. The craft must be able to gauge distance to the target star as well as beacon stars.
Any large planets should be avoided, whether they're predetermined or discovered in flight. The craft should be going fast enough that large planets at a distance won't have much effect, but gravitational measurements will filter in during the star's calculations and should be accounted for during inbound and outbound flight.

2) Even for the closest target, Alpha Centauri (4.365 ly), the craft would be entering the system with information or commands from Earth that are seven years old, containing inaccurate, incomplete and irrelevant assumptions. The craft may have found additional planets (some may be out of plane in binary systems, which current theory doesn't acknowledge), corrected the orbital positions and gravity of others, and identified hazards close to the star (or on the star's surface) that require greater course corrections.

3) Whether these calculations are done by the craft or on Earth, they're the same algorithms. The main difference is processing power. Results from Earth's best supercomputers would still take seven years to retrieve, while the craft could perform the task in 1-2 years using up-to-date and refined information. Even a relatively "dumb" craft would be wiser and more accurate than armies of desk jockeys and their finest supercomputers on Earth.

4) Lastly is the desire for close flybys of smaller planets that won't be discovered in time to inform Earth. It'd be a lucky break if one was near the flightpath as the craft's basic systems are already equipped to measure its gravity, the resulting changes in trajectory and any maneuvers required.

-- In no way would commands or suggestions from Earth be helpful. If the craft is capable of such a mission, it's inherently capable and better equipped to make its own adjustments. Conversely, if it requires or would even benefit from desk jockeys, it's not capable of the mission = minimum requirements.

Offline Paul451

  • Full Member
  • ****
  • Posts: 1166
  • Australia
  • Liked: 570
  • Likes Given: 494
Re: Design a mission to Proxima b
« Reply #166 on: 07/22/2017 02:20 AM »
too much reliance on desk jockeys [...] armies of desk jockeys [...] from desk jockeys

Oh stop trying to be a smart-ass. I'm talking about science. There are thousands of astronomers and planetary scientists working on theories of stellar and planetary formation, using constantly improving data on a huge number of star systems, from hundreds of major observation facilities on Earth and in orbit.

For an isolated probe to be able to match that, you require a level of intelligence that is measured in multiples of human intelligence. A sentient hyper-intelligent being. And that pushes the topic into pure SF.

There won't be time to receive this from Earth.

Even at unrealistic velocities, such as 10% of 'c', a trip to Proxima Centauri would take more than four decades. During that time, the entire theory of planetary formation will be revised and refined repeatedly. And during that time, observers on Earth will be using vastly superior platforms to continue observing the target system, and will thus have a vastly greater understanding of that system than when the probe was launched. Moreover, the things that they will want the probe to observe in the system will change during that time, in a way that can't be predicted at launch.

During the next leg, the trip will again take decades. That is time for the data from the first flyby to trickle back to Earth, be integrated into the scientific models of planetary formation, cause a major shift in the interpretation of data from other planetary systems, and further refine the models of the next target system, with plenty of time for those refinements to be sent back to the probe, long before it hits the flyby window.

[The exception would be Proxima -> Alpha Centauri at 10% of 'c'. But at 1% or less there'd be enough time for the observations from Proxima to be sent back to Earth, digested for a year or so, and the results uploaded back to the probe before it enters AC.]

Throw in that any propulsion technology capable of reaching another star system, even at less than 1% of 'c', is more than sufficient to push a much, much larger observation platform to the appropriate Solar gravitational focus for that same target. We will inevitably have a program operating at the gravitational focus long before any interstellar mission is launched. (And you would want to use such an array to receive data from the probe, since it drastically reduces the probe's power requirements.) And since the nature of gravitational lensing is that it moves the apparent observation point to the same distance from the target as the observer is from the "lens" (in this case, approximating observation from 600-1000 AU from the target star), then if the array at the gravitational focus is larger than the probe's telescopes, even at double the resolution the probe itself won't actually contribute anything useful until it's at around 300 AU from the system.

Hence even data and modelling algorithms 4+ years old will be decades ahead of anything programmed into the probe before launch, right up until the probe is a light-day from the target; realistically, not until it's a few light-hours out. Only then will observations by the probe exceed what is already known to Earth's scientists. And the few days the probe spends flying through the 100 AU window where its observations are useful are not sufficient for any on-board recalculation of the major gravitational map of the target system. Only very minor adjustments (like avoiding unmapped rocks) will be possible. Even the targets of observation will be fairly locked down before that primary flyby; there's no way the probe could recalculate the rules of engagement in the time it has.

[And if the probe is decelerating into a target system, intending to stay, then there's sufficient time for even observations from the probe itself, well within 300 AU, to be folded into Earth-side knowledge and sent back to the probe before it reaches the actual planetary system.]

Whether these calculations are done by the craft or on Earth - they're the same algorithms. The main difference is processing power. Using Earth's best supercomputers would still take seven years to retrieve while the craft could perform the task in at least 1-2yrs

The probe's computing power will not be 1/7th or 2/7th of a super-computer's. It will need to be hardened against decades of radiation, largely self-repairing, and use as little energy as physically possible. On the day of launch, it will be less powerful than a good desktop computer of the day.

If anything like Moore's Law continues, then the probe's computer will be at least three and as much as seven orders of magnitude slower than an equivalent Earth computer by the time it reaches the target system. So you'd be able to run the probe's software on a virtual-computer instance running on your watch.

And even if Moore's Law breaks down at some point, Dennard scaling would continue a little longer as the final chip-scale is made more energy efficient, so the class of computers on the probe will keep improving for a couple more decades. The probe will always be behind.
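The orders-of-magnitude gap claimed above is easy to reproduce as a back-of-envelope calculation. Assumptions are mine: Earth-side computing doubles every `doubling_years` (Moore's-Law style) for the whole trip while the probe's hardware is frozen at launch, and a 43-year trip stands in for "more than four decades" at 10% of c.

```python
import math

def orders_behind(trip_years, doubling_years):
    """log10 of the performance ratio accumulated over the trip."""
    return (trip_years / doubling_years) * math.log10(2)

for T in (2.0, 3.0):
    print(f"doubling every {T:.0f} yr: {orders_behind(43, T):.1f} orders of magnitude")
```

Doubling periods of 2-3 years give roughly 4-6.5 orders of magnitude, consistent with the three-to-seven range quoted above once slower or faster doubling is allowed for.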

Additionally, the craft will be performing other continuous functions: making observations; maintaining its lock on Earth (especially if the comms array doubles as an observation instrument and is constantly re-pointed); compressing mountains of data into the most compact (but error-resistant) form for transmission to Earth; plus the calculations necessary to determine its own position and course accurately enough.

During the narrow window of the flyby proper, the probe will not be doing any major processing of gravitational models of the target system based on observations made during the flyby. And between flybys, there are decades of time for Earth-based refinements to be sent to the probe. There's therefore no point even trying to give the probe that capacity; its autonomy would be limited to making minor adjustments during the flyby to avoid hitting anything, and to compensating (after the fact) for errors in trajectory due to errors in the gravitational map.



And all that's at an unrealistic 10% of 'c'. At a more realistic velocity of 1% of 'c', the mission will take long enough for speed of light delays to be trivial.

The premise of your obnoxious "desk jockeys" snipe assumes that technology in the probe is decades ahead of anything used on Earth, and which is subsequently not used on any other Earth-based or Solar-System based astronomy, science, or computing; and that Earth science effectively stops advancing at the moment the probe is launched.

And that defies any logic or reason.

Offline Propylox

  • Full Member
  • *
  • Posts: 100
  • Colorado
  • Liked: 8
  • Likes Given: 6
Re: Design a mission to Proxima b
« Reply #167 on: 07/22/2017 10:29 PM »
There are thousands of astronomers and planetary scientists working on theories of stellar and planetary formation, using constantly improving data on a huge number of star systems, from hundreds of major observation facilities on Earth and in orbit. ...
Even at unrealistic velocities, such as 10% of 'c', a trip to Proxima Centauri would take more than four decades. During that time, the entire theory of planetary formation will be revised and refined repeatedly. And during that time, observers on Earth will be using vastly superior platforms to continue observing the target system, and will thus have a vastly greater understanding of that system than when the probe was launched.
And none of that is relevant to the mission. In several decades the probe will return facts confirming some theories and discrediting others. Until then these armies and their infrastructure can continue guessing.
Consider it job security - waiting decades to know if you've been barking up the wrong tree.

Whether these calculations are done by the craft or on Earth - they're the same algorithms. The main difference is processing power. Using Earth's best supercomputers would still take seven years to retrieve while the craft could perform the task in at least 1-2yrs
The probe's computing power not be 1/7th or 2/7th of a super-computer. ... If anything like Moore's Law ... Dennard's Law ... Therefore there's no point even trying to give the probe that capacity, the probe's autonomy would be limited to making minor adjustments during the flyby to avoid hitting anything and to compensate for errors in trajectory (after the fact) due to errors in the gravitational-map. ...
The emphasis isn't on Earth's computing power or projected advancements, but on how the sheer distance negates their relevance. Previously addressed in this thread;
All the probe needs to do is collect data and relay it to Earth.
Not if you want to collect useful data. It needs to autonomously identify and select targets, select appropriate observations, and execute the observations with incredible speed, precision and reliability.
New Horizons, when it entered flyby phase, was entirely autonomous for the same reason as a probe to Proxima b would be. It was given all its instructions of where to look beforehand. ...
Assuming Earth-based calculations can be performed instantly on data sent by the probe, by the time that data travels from the probe to Earth and back, it's at least seven years old when nearing AC. In that time, a "dumb" probe would have already completed the calculations, using several years of more accurate data than what was sent to Earth.
Earth's people and processing power can therefore only provide incomplete, inaccurate calculations. This is vastly different from New Horizons' flyby, which was orchestrated from Earth using recent probe data.


------------------- On Propulsion --------------------
Why stop at Proxima b? Consider the design requirements for any such mission;
1- A giant curtain array and a large optical telescope for observation
3- Nuclear power and a layered approach to propulsion

The curtain array is inherently an electric sail when leaving our solar system, with the greatest thrust achieved near the Sun. There's also the possibility (DADT) of riding a directed CME once some velocity and distance from the Sun have been built up.
Deep Oberth burn right by the Sun, leave at ~100-120km/s (perhaps with help from RTG electric propulsion).
A "layered approach" must begin with an energy source as massive as the Sun before any self-propulsion takes over. Once out of the inner solar system the curtain array would begin radar sounding for avoidance maneuvers. This continues, to varying degrees, until it's mapping the target system.
A nuclear power source is required for operating the radar as well as for propulsion and calculations. And like observing a lantern drifting through a dark forest, Earth's receivers should be able to directly observe these radar reflections - though the probe's will be in much higher resolution.
When near the target system (1 to 1.5 ly out), the optical telescope would retract its shield to begin cataloguing planets, their orbits, mass estimates and the volatility of the star. This is the initial point where trajectory calculations can begin.
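The "leave at ~100-120 km/s" Oberth figure above can be roughly sketched. Assumptions are mine, not from the post: perihelion at 0.05 AU, arrival on a near-parabolic orbit (so the perihelion speed equals the local escape speed), and an impulsive burn at perihelion, giving v_inf = sqrt((v_esc + dv)^2 - v_esc^2).

```python
import math

GM_SUN = 1.32712e20   # solar gravitational parameter, m^3/s^2
AU = 1.496e11         # astronomical unit, m

def v_infinity(r_peri_au, dv_kms):
    """Hyperbolic excess speed (km/s) after an impulsive perihelion burn."""
    v_esc = math.sqrt(2 * GM_SUN / (r_peri_au * AU)) / 1e3  # local escape, km/s
    return math.sqrt((v_esc + dv_kms) ** 2 - v_esc ** 2)

print(f"{v_infinity(0.05, 25):.0f} km/s")  # a 25 km/s burn at 0.05 AU
```

Under these assumptions, a ~25 km/s burn at a 0.05 AU perihelion yields roughly 100 km/s of hyperbolic excess, in line with the figure quoted; a deeper perihelion or bigger burn pushes it toward 120 km/s.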
« Last Edit: 07/22/2017 10:31 PM by Propylox »

Offline Paul451

  • Full Member
  • ****
  • Posts: 1166
  • Australia
  • Liked: 570
  • Likes Given: 494
Re: Design a mission to Proxima b
« Reply #168 on: 07/23/2017 12:17 AM »
There are thousands of astronomers and planetary scientists working on theories of stellar and planetary formation, using constantly improving data on a huge number of star systems, from hundreds of major observation facilities on Earth and in orbit. ...
Even at unrealistic velocities, such as 10% of 'c', a trip to Proxima Centauri would take more than four decades. During that time, the entire theory of planetary formation will be revised and refined repeatedly. And during that time, observers on Earth will be using vastly superior platforms to continue observing the target system, and will thus have a vastly greater understanding of that system than when the probe was launched.
And none of that is relevant to the mission. In several decades the probe will return facts confirming some theories and discrediting others. Until then these armies and their infrastructure can continue guessing.

Bizarre. You really have no respect for science, yet want an interstellar probe.

Offline Propylox

  • Full Member
  • *
  • Posts: 100
  • Colorado
  • Liked: 8
  • Likes Given: 6
Re: Design a mission to Proxima b
« Reply #169 on: 08/01/2017 11:47 AM »
------------------- On Propulsion --------------------
Why stop at Proxima b? Consider the design requirements for any such mission;
1- A giant curtain array and a large optical telescope for observation
3- Nuclear power and a layered approach to propulsion

The curtain array is inherently an electric sail when leaving our solar system with the greatest thrust achieved near the Sun. There's also the possibility of riding a directed CME once some velocity and distance from the Sun has been built up.
Deep Oberth burn right by the Sun, leave at ~100-120km/s (perhaps with help from RTG electric propulsion).
A "layered approach" must begin with an energy source as massive as the Sun before any self-propulsion takes over.
Wayback threads are below, but information/discussion on electric sails is minimal, especially on the anticipated weight and thrust per m². Are there better numbers for the thrust available from the solar wind and CMEs, to determine whether increasing the size of the curtain array, or extending a temporary e-sail from it, is worth it?
2008 - https://forum.nasaspaceflight.com/index.php?topic=12795.0
'11/'13 - https://forum.nasaspaceflight.com/index.php?topic=24118.0
'15/'16 - https://forum.nasaspaceflight.com/index.php?topic=38258.0
Page 61 of the 2017 House budget report, posted by yg1968
https://forum.nasaspaceflight.com/index.php?topic=39540.340
Quote
Interstellar propulsion research.—Current NASA propulsion investments include advancements in chemical, solar electric, and nuclear thermal propulsion ... The NASA Innovative Advanced Concepts (NIAC) program is currently funding concept studies of directed energy propulsion for wafer-sized spacecraft that in principle could achieve velocities exceeding 0.1c and an electric sail that intercepts solar wind protons. ...
« Last Edit: 08/01/2017 11:52 AM by Propylox »

Offline Propylox

  • Full Member
  • *
  • Posts: 100
  • Colorado
  • Liked: 8
  • Likes Given: 6
Re: Design a mission to Proxima b
« Reply #170 on: 08/02/2017 01:08 PM »
Electric sail follow-up;
I either don't understand, or disagree with, the concept of rotating an electric sail to spread its wires, as this cannot effectively translate force to the spacecraft. This is a basic applied-physics problem, which is why I'm struggling to understand how it could have been overlooked. Corrections? Facepalm and mockery of my obvious oversight?

i.e.: A cable is extended laterally by centrifugal force, intending to catch ions and impart thrust along the z-axis. If those ions impart 1 N, deflecting the wire 10 degrees, only 0.17 N is applied along the z-axis and 0.98 N laterally. That's a very inefficient use of an already minimal force. If the ions wane, radial momentum will start to return the wires toward their extended state, but additional rotational force will be needed to fully extend them to their initial state. As such, this architecture gains little thrust while requiring constant rotational thrust, all while cranking out electrons. It seems the centrifugal extension doesn't enable this design, but nullifies it.
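The numbers in the example are a plain sin/cos resolution of the force on a wire canted 10 degrees from its extended position, reproduced here as a sketch:

```python
import math

def decompose(force_n, deflection_deg):
    """Split a force on a deflected wire into axial (z) and lateral parts."""
    theta = math.radians(deflection_deg)
    return force_n * math.sin(theta), force_n * math.cos(theta)

axial, lateral = decompose(1.0, 10.0)
print(f"axial: {axial:.2f} N, lateral: {lateral:.2f} N")  # 0.17 N and 0.98 N
```

One caveat (mine, not from the post): in a symmetric spinning sail, the lateral pulls of diametrically opposed wires cancel at the hub, so the lateral component doesn't all become net sideways force on the spacecraft, though the per-wire axial fraction is indeed small at small cant angles.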
« Last Edit: 08/02/2017 01:10 PM by Propylox »
