Author Topic: EM Drive Developments - related to space flight applications - Thread 2  (Read 3322172 times)

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
@ RODAL

Arrgh, Mondays !

Looked over my bleary weekend, noticed I was using diameters AGAIN !

Mode    Frequency (MHz)   Quality Factor Q   Input Power (W)   Mean Thrust (μN)   Calculated w/o dielectric (μN)
TE012   1880.4            22000              2.6               55.4               10.8
TM212   1932.6            7320               16.9              91.2               38.5
TM212   1936.7            18100              16.7              50.1               93.5
TM212   1937.115          6726               50                66                 104.0

Anyway, shows it pays to rewrite everything in the same place !

....

Great !

In order to understand the above (please correct me if I am wrong): you used in your formula the actual frequencies and mode shapes that occurred in the EM Drive experiment with the dielectric, so in that restricted sense you did calculate with the dielectric.


FYI

Cleanup and de-typo of the take on applying the Equivalence Principle.


The proposition: just as an accelerating frame of reference causes dispersion, a dispersive cavity resonator implies an equivalent accelerating frame of reference (to 1st order, using a massless, perfectly conducting cavity, no dielectric).


Starting with the expressions for the frequency of a cylindrical RF cavity:

f = (c/(2*Pi))*((X/R)^2+((p*Pi)/L)^2)^.5

For TM modes, X = X[m,n] = the n-th zero of the Bessel function of order m, J_m:
X[1,1]=3.83, X[0,1]=2.40, X[0,2]=5.52, X[1,2]=7.02, X[2,1]=5.14, X[2,2]=8.42, X[1,3]=10.17, etc.

and for TE modes, X = X'[m,n] = the n-th zero of the derivative of the Bessel function of order m, J'_m:
X'[0,1]=3.83, X'[1,1]=1.84, X'[2,1]=3.05, X'[0,2]=7.02, X'[1,2]=5.33, X'[1,3]=8.54, X'[0,3]=10.17, X'[2,2]=6.71, etc.

Rotate the dispersion relation of the cavity into Doppler frame to get the Doppler shifts, that is to say, look at the dispersion curve intersections of constant wave number instead of constant frequency.

df = (1/(2*f))*(c/(2*Pi))^2*X^2*((1/Rs^2)-(1/Rb^2))

and from there the expression for the acceleration g from:

g = (c^2/L)*(df/f) such that:

g = (c^2/(2*L*f^2))*(c/(2*Pi))^2*X^2*((1/Rs^2)-(1/Rb^2))

Using the "weight" of the photon in the accelerated frame from:

"W" = (h*f/c^2)*g =>  "W" = T = (h/L)*df

gives thrust per photon:

T = (h/(2*L*f))*(c/(2*pi))^2*X^2*((1/Rs^2)-(1/Rb^2))

If the number of photons is N = (P/(h*f))*(Q/(2*pi*f)), i.e., the stored energy P*Q/(2*pi*f) divided by the photon energy h*f, then:

NT = P*Q*(1/(4*pi*L*f^3))*(c/(2*pi))^2*X^2*((1/Rs^2)-(1/Rb^2))
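For anyone who wants to play with these expressions, here is a minimal numerical sketch of the chain f → df → thrust. The radii, length, power, and Q below are made-up placeholders, not the actual test-article geometry, and the photon number is taken as the stored energy P*Q/(2*pi*f) divided by h*f (which is what makes the f^3 appear in the final expression):

```python
# Sketch of the frequency -> Doppler shift -> thrust chain above.
# Rs, Rb, Lc, P and Q are hypothetical placeholder values.
import math

h = 6.62607015e-34   # Planck constant (J*s)
c = 299792458.0      # speed of light in vacuum (m/s)

def cavity_freq(X, R, Lc, p):
    """f = (c/(2*pi)) * sqrt((X/R)^2 + (p*pi/Lc)^2), cylindrical cavity."""
    return (c / (2 * math.pi)) * math.sqrt((X / R) ** 2 + (p * math.pi / Lc) ** 2)

def total_thrust(P, Q, f, X, Rs, Rb, Lc):
    """NT = P*Q*(1/(4*pi*Lc*f^3)) * (c/(2*pi))^2 * X^2 * (1/Rs^2 - 1/Rb^2)."""
    return (P * Q / (4 * math.pi * Lc * f ** 3)) * (c / (2 * math.pi)) ** 2 \
           * X ** 2 * (1 / Rs ** 2 - 1 / Rb ** 2)

X = 5.14                         # TM212 uses X[2,1] from the table of zeros above
Rs, Rb, Lc = 0.08, 0.14, 0.23    # hypothetical small/big end radii and length (m)
P, Q = 100.0, 7000.0             # hypothetical input power (W) and quality factor
f = cavity_freq(X, math.sqrt(Rs * Rb), Lc, p=2)   # crude equivalent cylinder

# Rebuild NT from the per-photon pieces; it matches the closed form when the
# photon number is N = (P/(h*f)) * (Q/(2*pi*f)).
df = (1 / (2 * f)) * (c / (2 * math.pi)) ** 2 * X ** 2 * (1 / Rs ** 2 - 1 / Rb ** 2)
T_photon = (h / Lc) * df                   # "weight" of one photon
N = (P * Q) / (2 * math.pi * h * f ** 2)   # number of photons in the cavity
```

Multiplying N by T_photon reproduces the closed-form NT term by term, which is a useful consistency check on the algebra.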


This does fit (as far as I've gotten) the concept of a self-accelerating Dirac wavepacket (which does conserve momentum).

Slow goin', thanks for your patience.

Excellent!  Thank you for posting the complete equations.

One suggestion:  In the expression NT = P*Q*(1/(4*pi*L*f^3))*(c/(2*pi))^2*X^2*((1/Rs^2)-(1/Rb^2))

the speed of light in vacuum "c" appears in the numerator without being divided by the SquareRoot of the relative electric permittivity and relative magnetic permeability.

Since the relative electric permittivity of the dielectric is 2.3, this would decrease the values in the table by a factor of Sqrt[2.3]=1.52 if the whole cavity were occupied by the dielectric.  Granted, only a portion of the truncated cone contains the dielectric, which decreases the dividing factor, but any amount will reduce the effective value of c in the medium, giving lower thrust and hence values closer to the experimental measurements. 

For example, very roughly, assuming that 1/3 of the longitudinal length is occupied by the dielectric, and treating the cavity as a medium with the averaged properties, Sqrt[(2.3*1/3)+1*(2/3)]=1.20, the thrust values would be reduced by a factor of 1.20.  So for the most important test (the one recently performed in vacuum; the other experimental values may have been affected by thermal convection effects in the air and are therefore less reliable), instead of 104 μN you would get 87 μN, which compares better with the experimental value of 66 μN.
Or, taking the highest thrust value (disregarding turn-on transients) rather than the truncated upper mean of the experimental trace, which is 78 μN (see trace below), one gets an excellent comparison with Notsosureofit's dielectric-weighted prediction in vacuum of 87 μN: barely an 11% difference!
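A sketch of that back-of-envelope correction (2.3 and the 1/3 fill fraction are the same values used above; the length-weighted averaging rule is the rough approximation stated, not a rigorous mixing law):

```python
# Effective-medium reduction factor for c (and hence thrust) when a fraction
# of the cavity length is filled by a dielectric of relative permittivity eps_r.
import math

def dielectric_factor(eps_r, fill_fraction):
    # simple length-weighted average of permittivity, then Sqrt[] as above
    return math.sqrt(eps_r * fill_fraction + 1.0 * (1.0 - fill_fraction))

factor = dielectric_factor(2.3, 1.0 / 3.0)   # ~1.20
corrected = 104.0 / factor                   # ~87 uN from the 104 uN prediction
```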

« Last Edit: 02/17/2015 02:10 pm by Rodal »

Offline Notsosureofit

  • Full Member
  • ****
  • Posts: 691
  • Liked: 747
  • Likes Given: 1729
@ RODAL

PS:  Got plenty of big vacuum chambers here, (couple 6' x 8' cylinders, etc.) no torsion balances left though (not put together anyway .. might have parts) and a big lack of time !

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
@ RODAL

PS:  Got plenty of big vacuum chambers here, (couple 6' x 8' cylinders, etc.) no torsion balances left though (not put together anyway .. might have parts) and a big lack of time !
That makes for such an attractive setup, that anyone would gladly trudge through 7 ft of snow !  :)
« Last Edit: 02/17/2015 02:22 pm by Rodal »

Offline Notsosureofit

  • Full Member
  • ****
  • Posts: 691
  • Liked: 747
  • Likes Given: 1729
@ RODAL

PS:  Got plenty of big vacuum chambers here, (couple 6' x 8' cylinders, etc.) no torsion balances left though (not put together anyway .. might have parts) and a big lack of time !
That makes for such an attractive setup, that anyone would gladly trudge through 7 ft of snow !  :)

Time and trudging are an absolute necessity for keeping the doors open on a small (15.5k sq ft, 4 people) R&D "skunkworks" like this.  Been running at a loss lately.

But, back to emdrive spaceflight ...
« Last Edit: 02/17/2015 02:32 pm by Notsosureofit »

Offline SWGlassPit

  • I break space hardware
  • Full Member
  • ****
  • Posts: 845
  • Liked: 893
  • Likes Given: 142
...

As for whether or not meep integrates the equations, I think it is a matter of terminology...
Numerical Integration involves approximating definite integrals by  summing discretized areas.  This is not what the Finite Difference method does. 

Sorry about being perhaps overly rigorous in the use of the word "integrate". I point out this (finite? pun intended  :) ) difference because it is important to understand the convergence problems of Finite Difference methods (as opposed to integration methods like the Boundary Element Method, for example, or methods based on variational principles like the Finite Element Method).
What the Finite Difference method does is instead to approximate solutions to differential equations using finite differences to approximate derivatives. 
The idea of a finite difference method is the transformation of a continuous domain into a discrete set of points, the mesh. At every grid point the given differential operator is approximated by a difference operator.

The issue is that numerical differentiation is always a much trickier problem than numerical integration (from a convergence viewpoint).

The Finite Difference method is a very old method (references go back to the 19th century), but great progress was made with it during and after World War II, thanks to the development of the digital computer and to von Neumann and Friedrichs, mainly in the Manhattan Project.

At MIT's ASRL very complex Finite Difference codes were developed, for example the PETROS code:
http://bit.ly/1AJ5Vgt in addition to Finite Element and other types of numerical analyses.

To expand on this, Finite Difference schemes suffer from a deep, inherent weakness that is not true of Finite Element methods: the solution spaces are vastly different.

In Finite Difference schemes, the solution is defined in terms of the function value (i.e., a plain old number) at a finite number of discrete points -- there is no mathematical basis to define the correct interpolation between points.  In Finite Element methods, the solution space is a collection of functions, defined across the entire domain.  Aside from providing a mathematical basis for interpolation, having a solution space as a collection of functions allows you to estimate the approximation error in the solution (in fact, and this is outside the scope of this thread, provided it is properly applied, Finite Element method has the "best approximation" property, which means that the method will give you the least possible error for the discretization you supply).  This is most easily done by taking the L2-norm (in no way related to NSF.com's L2  ;) ) of the residual that results from plugging the solution back into the original differential equation.  To be specific, (using LaTeX notation), if

A \phi - f = 0,

where A is the differential operator, \phi is the (infinite-dimensional) solution vector, and f is the forcing function (or load vector), then the Finite Element method gives a finite-dimensional approximation \phi_h, which gives the equation:

A \phi_h - f = e,

where e is the residual, a (generally non-zero) function that represents the error that results from substituting the approximation into the differential equation.  By squaring this value and integrating it across the problem domain (which gives the square of the L2-norm), we have a reliable measure of how good our answer is -- it is strictly non-negative, a lower number represents a better approximation, and it is only zero if our answer is exact.

To-wit: Finite Element methods give you both an approximation to the exact solution and a measure of how good that approximation is, even if the exact solution is not easily obtained.  Finite Difference schemes provide no such information regarding the correctness of the results they provide.
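A toy illustration of that residual L2-norm check, for a hypothetical 1-D problem with a known forcing function (the trial space and problem are invented for illustration only):

```python
# Residual L2-norm check for a toy problem: u'' = f on (0,1), u(0)=u(1)=0,
# with f = -pi^2*sin(pi*x) so the exact solution is u = sin(pi*x).
# Trial function phi_h = c*x*(1-x); its second derivative is -2c (analytic).
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
f = -np.pi ** 2 * np.sin(np.pi * x)

def residual_norm(c):
    e = (-2.0 * c) - f                    # residual A*phi_h - f = phi_h'' - f
    return np.sqrt(np.sum(e ** 2) * dx)   # L2 norm (simple quadrature)

# A better coefficient gives a smaller residual norm, even though we never
# reference the exact solution itself -- that is the point of the measure.
better, worse = residual_norm(4.0), residual_norm(1.0)
```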

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
....
To-wit: Finite Element methods give you both an approximation to the exact solution and a measure of how good that approximation is, even if the exact solution is not easily obtained.  Finite Difference schemes provide no such information regarding the correctness of the results they provide.
That's why, to assess the results of a numerical solution (for example the Finite Difference method), it is critical to compare them to an exact solution.  In this case, an exact solution for a cylindrical cavity exists (http://en.wikipedia.org/wiki/Microwave_cavity#Cylindrical_cavity), and it would be worthwhile to check how far the MEEP solution for a resonant cylindrical cavity, say of diameter=Sqrt[BigDiameterOfTruncatedCone * SmallDiameterOfTruncatedCone] and the same axial length, is from that exact solution, using the same material inputs and mesh as in the Finite Difference solution of the truncated cone.

An exact solution to a truncated cone also exists, but it is much more complicated to solve (as it involves the solution of two eigenvalue problems) than the cylindrical cavity.

If the MEEP Finite Difference solution (with the same mesh, inputs and dimensions, as discussed above) cannot match the exact solution of a cylindrical cavity, how can one be confident of the solution for a more complicated problem for which there is no exact solution to compare with?

Another issue is using a flat two-dimensional simulation rather than a three-dimensional simulation, because of the huge run-times involved in a 3-D simulation.  This involves a very severe assumption that a 2-D model is sufficient, and comparison with the exact solution to a cylindrical cavity (http://en.wikipedia.org/wiki/Microwave_cavity#Cylindrical_cavity) is therefore of paramount importance.
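A miniature version of that kind of exact-solution benchmark (in 1-D rather than for a cylindrical cavity, and with an arbitrary example length):

```python
# 1-D benchmark in the spirit of the suggestion: solve -u'' = k^2 u,
# u(0) = u(L) = 0, by finite differences and compare the lowest k against
# the exact value pi/L.  L is an arbitrary example length.
import numpy as np

def fd_lowest_k(L, n):
    h = L / (n + 1)                       # n interior grid points
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    return np.sqrt(np.linalg.eigvalsh(A)[0])

L = 0.2286                                # example length only (m)
exact = np.pi / L
err_coarse = abs(fd_lowest_k(L, 50) - exact)
err_fine = abs(fd_lowest_k(L, 100) - exact)   # refinement should shrink the error
```

Seeing the error shrink under refinement toward a known exact answer is exactly the confidence check being suggested for the MEEP cylindrical-cavity run.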

This is only meant as a constructive suggestion. 

The very time-consuming and dedicated work of @aero deserves very strong praise, as having shown that evanescent waves is a possible explanation for the EM Drive results.  We sincerely hope that he continues with it.
« Last Edit: 02/17/2015 03:54 pm by Rodal »

Offline birchoff

  • Full Member
  • **
  • Posts: 273
  • United States
  • Liked: 125
  • Likes Given: 95
Has anyone looked at wrangling some cloud VM time to run these processes?

You can find the amazon compute vm prices here
http://aws.amazon.com/ec2/pricing/

and the azure compute vm prices here
http://azure.microsoft.com/en-us/pricing/details/virtual-machines/

not sure what the runtime looks like but I would hazard a guess that you could figure out a cheap enough solution that allows you to get results in the least amount of time.



Offline aero

  • Senior Member
  • *****
  • Posts: 3629
  • 92129
  • Liked: 1146
  • Likes Given: 360
Has anyone looked at rangling some cloud vm time to run these processes.

You can find the amazon compute vm prices here
http://aws.amazon.com/ec2/pricing/

and the azure compute vm prices here
http://azure.microsoft.com/en-us/pricing/details/virtual-machines/

not sure what the runtime looks like but I would hazard a guess that you could figure out a cheap enough solution that allows you to get results in the least amount of time.

I've looked at it but I'm not going to take the responsibility of paying for and trying to figure out how to use their systems, and to install and use meep on those systems.

I do have an opinion. The most understandable documentation of the available capabilities is from Google.
https://cloud.google.com/compute/pricing

But at only 100 GB memory for a high memory compute configuration, I'd be concerned about size of the model. For a 3D model, Meep memory requirements go up by a factor of 8 for each doubling of the resolution and compute requirements by a factor of 16 for the same doubling.
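That scaling rule can be jotted down directly (the baseline memory and run-time figures are placeholders, not measurements from any actual run):

```python
# Resource scaling quoted above for a 3-D Meep model: memory grows ~8x
# (2^3, cell count) and compute ~16x (2^4, cells times time steps) per
# doubling of resolution.  The baseline 4 GB / 2 h figures are placeholders.
def scaled_requirements(base_mem_gb, base_hours, doublings):
    return base_mem_gb * 8 ** doublings, base_hours * 16 ** doublings

mem_gb, hours = scaled_requirements(4.0, 2.0, doublings=2)
# two doublings: 4 GB -> 256 GB, already past a 100 GB cloud instance
```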

If someone wanted to do this, it would be necessary to establish the model at low resolution on a convenient machine, then calculate the resources needed by the problem running at the resolution required for viable results.

Meep was designed to run massive problems at high resolution on supercomputers. Sixteen processors each with 6.5 GB ram is not really a very impressive supercomputer. And I wonder, can these cloud based compute engines guarantee model execution synchronization for the duration of a run that may consume hours of CPU? If it can't be synchronized then AIUI all the CPU's wait for the slowest partition to keep up. That could get costly in a hurry.

I have designed and priced a custom computer that could provide a very good basis for estimating the resources needed to run high fidelity problems. It was priced at $2038.96 (USD), a firm quote, tax included. It is about 1/3 the machine referred to above. (Six cores with 32 GB DDR4 memory) I'm not going to take the responsibility for paying for that machine, either, though I would love to have it.
Retired, working interesting problems

Offline birchoff

  • Full Member
  • **
  • Posts: 273
  • United States
  • Liked: 125
  • Likes Given: 95
Has anyone looked at rangling some cloud vm time to run these processes.

You can find the amazon compute vm prices here
http://aws.amazon.com/ec2/pricing/

and the azure compute vm prices here
http://azure.microsoft.com/en-us/pricing/details/virtual-machines/

not sure what the runtime looks like but I would hazard a guess that you could figure out a cheap enough solution that allows you to get results in the least amount of time.

I've looked at it but I'm not going to take the responsibility of paying for and trying to figure out how to use their systems, and to install and use meep on those systems.

I do have an opinion. The most understandable documentation of the available capabilities is from Google.
https://cloud.google.com/compute/pricing

But at only 100 GB memory for a high memory compute configuration, I'd be concerned about size of the model. For a 3D model, Meep memory requirements go up by a factor of 8 for each doubling of the resolution and compute requirements by a factor of 16 for the same doubling.

If someone wanted to do this, it would be necessary to establish the model at low resolution on a convenient machine, then calculate the resources needed by the problem running at the resolution required for viable results.

Meep was designed to run massive problems at high resolution on supercomputers. Sixteen processors each with 6.5 GB ram is not really a very impressive supercomputer. And I wonder, can these cloud based compute engines guarantee model execution synchronization for the duration of a run that may consume hours of CPU? If it can't be synchronized then AIUI all the CPU's wait for the slowest partition to keep up. That could get costly in a hurry.

I have designed and priced a custom computer that could provide a very good basis for estimating the resources needed to run high fidelity problems. It was priced at $2038.96 (USD), a firm quote, tax included. It is about 1/3 the machine referred to above. (Six cores with 32 GB DDR4 memory) I'm not going to take the responsibility for paying for that machine, either, though I would love to have it.

Nice to know someone has looked at it. Was just wondering if these computation limitations could be simply solved by the application of a little sprinkle of the cloud. As for paying for access to the resources, I guess the question is how badly do we want accurate results.

Offline Mulletron

  • Full Member
  • ****
  • Posts: 1150
  • Liked: 837
  • Likes Given: 1071
https://iafastro.directory/iac/archive/browse/IAC-13/C4/P/16863/

Quote
The EmDrive's resonant cavity has the characteristics of a cutoff waveguide. By reference to the phenomena of electromagnetic wave anomalous propagation in the cutoff waveguide, the fact that the electromagnetic wave can be reflected without a metal surface in the cutoff waveguide is presented in the paper. At the same time, another fact, that the electromagnetic wave distribution in the EmDrive's resonant cavity shows a characteristic of evanescent waves, is also presented. It is a kind of electromagnetic wave anomalous propagation. This anomalous propagation can be described by the photon tunneling effect, consistent with quantum field theory. At last, the opinion that the EmDrive reveals some properties of the background vacuum is put forward in the paper, and the introduction of the virtual photon process may be a new method to analyze the momentum conservation of the EmDrive.

He's right, you know; you really don't even need the small-diameter end plate. As the tapered frustum diameter reduces to cutoff, it imposes a natural boundary beyond which modes that can't resonate go evanescent. Standing waves will still appear within the cavity, just like how reflected power appears in a cut waveguide. Might even be a useful feature. Food for thought at least. I only have the abstract.

So this approach described above certainly would apply to a waveguide excited by a wideband RF signal, such as one would get from a magnetron. For a narrowband CW excitation like at Eagleworks, there wouldn't be generation of evanescent modes in the manner described by the Chinese.

The fact that Eagleworks reports numerous resonant modes in very close proximity to each other, in TE modes in particular (which makes sense given the continuously changing E-field boundary condition), should be taken to heart, as this may very well be a useful feature for Q-thrusters.

I think from the research presented in this forum, we've managed to suss out a testable framework of how best to implement Q thrusters driven by RF.
-TE modes are important, in particular because that places the magnetic field longitudinally. Is TE012 the best? The Chinese, Shawyer, and a hint from Eagleworks suggest it is. (Technical problems aside, like maintaining resonance; ways to overcome them have been suggested.)
-I personally think that exciting as many TE modes as possible simultaneously will yield best results.
-Driving the cavity with a wideband signal is important.
-The above serves to maximize field localization and intensity within the resonant cavity.*
-The final is choosing materials which are strongly magnetoelectric and/or magnetochiral.**
* http://adsabs.harvard.edu/abs/2014NatPh..10..394P 
** https://vtechworks.lib.vt.edu/bitstream/handle/10919/25258/1.2337996.pdf?sequence=1


There are still a lot of questions that need answering as to why the Chinese are reporting much better performance from using magnetrons, and this kind of thinking may provide an answer, at least in part.

There's a lot of stuff out there about evanescent tunneling in undersized waveguides. I have no idea what this has to do with thrust; it might lead to another unknown interaction or a new way of describing the already familiar.
http://www.popularscience.co.uk/features/feat11.htm
http://www.hindawi.com/journals/ijo/2013/947068/
http://www.ifac.cnr.it/toq-www/guide1-eng.htm
http://goo.gl/qlYrjn

I think @Aero will be pleased.  ;)
« Last Edit: 02/17/2015 08:06 pm by Mulletron »
And I can feel the change in the wind right now - Rod Stewart

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
https://iafastro.directory/iac/archive/browse/IAC-13/C4/P/16863/

...
Great article, Mulletron, thank you for bringing it to our attention.

I only found the abstract (sorry if I missed where the actual paper is linked). Is there a link to the actual paper ?  (If the actual paper is in Chinese (language) that's fine as well)

PS: making great progress on the exact solution for the truncated cone.

EDIT: for the NASA cavity dimension some modes are purely resonant, some modes are purely evanescent and some modes go from resonant to evanescent.
« Last Edit: 02/17/2015 08:03 pm by Rodal »

Offline Mulletron

  • Full Member
  • ****
  • Posts: 1150
  • Liked: 837
  • Likes Given: 1071
It says you have to email him and ask for it.
And I can feel the change in the wind right now - Rod Stewart

Offline tchernik

  • Full Member
  • **
  • Posts: 274
  • Liked: 315
  • Likes Given: 641

Nice to know someone has looked at it. Was just wondering if these computation limitations could be simply solved by the application of a little sprinkle of the cloud. As for paying for access to the resources, I guess the question is how badly do we want accurate results.

If the cost of the simulation approaches that of an actual experiment, you'd be better off doing the experiment!

Does anyone have an idea of how much a decent DIY replication would cost?

Offline aero

  • Senior Member
  • *****
  • Posts: 3629
  • 92129
  • Liked: 1146
  • Likes Given: 360
Has anyone looked at rangling some cloud vm time to run these processes.

You can find the amazon compute vm prices here
http://aws.amazon.com/ec2/pricing/

and the azure compute vm prices here
http://azure.microsoft.com/en-us/pricing/details/virtual-machines/

not sure what the runtime looks like but I would hazard a guess that you could figure out a cheap enough solution that allows you to get results in the least amount of time.

I've looked at it but I'm not going to take the responsibility of paying for and trying to figure out how to use their systems, and to install and use meep on those systems.

I do have an opinion. The most understandable documentation of the available capabilities is from Google.
https://cloud.google.com/compute/pricing

But at only 100 GB memory for a high memory compute configuration, I'd be concerned about size of the model. For a 3D model, Meep memory requirements go up by a factor of 8 for each doubling of the resolution and compute requirements by a factor of 16 for the same doubling.

If someone wanted to do this, it would be necessary to establish the model at low resolution on a convenient machine, then calculate the resources needed by the problem running at the resolution required for viable results.

Meep was designed to run massive problems at high resolution on supercomputers. Sixteen processors each with 6.5 GB ram is not really a very impressive supercomputer. And I wonder, can these cloud based compute engines guarantee model execution synchronization for the duration of a run that may consume hours of CPU? If it can't be synchronized then AIUI all the CPU's wait for the slowest partition to keep up. That could get costly in a hurry.

I have designed and priced a custom computer that could provide a very good basis for estimating the resources needed to run high fidelity problems. It was priced at $2038.96 (USD), a firm quote, tax included. It is about 1/3 the machine referred to above. (Six cores with 32 GB DDR4 memory) I'm not going to take the responsibility for paying for that machine, either, though I would love to have it.

Nice to know someone has looked at it. Was just wondering if these computation limitations could be simply solved by the application of a little sprinkle of the cloud. As for paying for access to the resources, I guess the question is how badly do we want accurate results.

Not so much accurate results: the results I have presented are accurate to second order for the problem evaluated.  It's more about more representative problems and higher fidelity models (3D, and resolving smaller gaps, for example) that could be achieved with more computing power.

I have calculated error bounds, the magnitude of step size squared for my current run, which I hope to post in about 6 hours.

delta s ~= 0.023 mm = 0.000023 meters, (delta s)^2 =5.29E-010
delta t ~= 0.0001071422 sec, (delta t)^2 = 1.1479455161248E-008 s^2

Those error bounds seem acceptable to me for the problems I am evaluating. The errors inherent in the problem formulated to represent characteristics of the actual EM thruster/vacuum chamber are far, far larger than the computational errors of the simulation.
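For a feel of what "accurate to second order" buys, a quick sanity check on a toy function: halving the step of a centered difference should cut the error roughly 4x:

```python
# Second-order accuracy in miniature: the centered-difference error falls
# roughly 4x when the step h is halved.  math.sin is just a toy function.
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

def error(h):
    return abs(central_diff(math.sin, 1.0, h) - math.cos(1.0))

ratio = error(1e-3) / error(5e-4)   # close to 4 for an O(h^2) scheme
```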

I kind of wish people would stop posting their concern about numerical computational deficiencies here. If they have a valid concern then they really should take it up with MIT, where the codes were written, or perhaps ONR, DARPA who spent the money to pay for the development. Maybe ONR/DARPA should ask for their money back?

Or PM me, I can point them to several online sites giving results that scrutinise meep accuracy.



Retired, working interesting problems

Offline birchoff

  • Full Member
  • **
  • Posts: 273
  • United States
  • Liked: 125
  • Likes Given: 95
Has anyone looked at rangling some cloud vm time to run these processes.

You can find the amazon compute vm prices here
http://aws.amazon.com/ec2/pricing/

and the azure compute vm prices here
http://azure.microsoft.com/en-us/pricing/details/virtual-machines/

not sure what the runtime looks like but I would hazard a guess that you could figure out a cheap enough solution that allows you to get results in the least amount of time.

I've looked at it but I'm not going to take the responsibility of paying for and trying to figure out how to use their systems, and to install and use meep on those systems.

I do have an opinion. The most understandable documentation of the available capabilities is from Google.
https://cloud.google.com/compute/pricing

But at only 100 GB memory for a high memory compute configuration, I'd be concerned about size of the model. For a 3D model, Meep memory requirements go up by a factor of 8 for each doubling of the resolution and compute requirements by a factor of 16 for the same doubling.

If someone wanted to do this, it would be necessary to establish the model at low resolution on a convenient machine, then calculate the resources needed by the problem running at the resolution required for viable results.

Meep was designed to run massive problems at high resolution on supercomputers. Sixteen processors each with 6.5 GB ram is not really a very impressive supercomputer. And I wonder, can these cloud based compute engines guarantee model execution synchronization for the duration of a run that may consume hours of CPU? If it can't be synchronized then AIUI all the CPU's wait for the slowest partition to keep up. That could get costly in a hurry.

I have designed and priced a custom computer that could provide a very good basis for estimating the resources needed to run high fidelity problems. It was priced at $2038.96 (USD), a firm quote, tax included. It is about 1/3 the machine referred to above. (Six cores with 32 GB DDR4 memory) I'm not going to take the responsibility for paying for that machine, either, though I would love to have it.

Nice to know someone has looked at it. Was just wondering if these computation limitations could be simply solved by the application of a little sprinkle of the cloud. As for paying for access to the resources, I guess the question is how badly do we want accurate results.

Not so much accurate results. The results I have presented are accurate to second order, for the problem evaluated. More like more representative problems, higher fidelity models (3D, and resolving smaller gaps, for example), that could be achieved with more computing power.

I have calculated error bounds, the magnitude of step size squared for my current run, which I hope to post in about 6 hours.

delta s ~= 0.023 mm = 0.000023 meters, (delta s)^2 =5.29E-010
delta t ~= 0.0001071422 sec, (delta t)^2 = 1.1479455161248E-008 s^2

Those error bounds seem acceptable to me for the problems I am evaluating. The errors inherent in the problem formulated to represent characteristics of the actual EM thruster/vacuum chamber are far, far larger than the computational errors of the simulation.

I kind of wish people would stop posting their concern about numerical computational deficiencies here. If they have a valid concern then they really should take it up with MIT, where the codes were written, or perhaps ONR, DARPA who spent the money to pay for the development. Maybe ONR/DARPA should ask for their money back?

Or PM me, I can point them to several online sites giving results that scrutinise meep accuracy.

I wasn't criticizing the numerical accuracy of the work you're doing; you are correct about my usage of the term "accurate". I just noticed the remark on the lack of computational resources for certain levels of resolution and thought I would make a suggestion.

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
...Actually the basis of the exact solution (for the truncated cone microwave cavity) goes back to the great US engineer Schelkunoff in 1938 .   ...

More interesting information about the great Russian/American scientist/engineer Schelkunoff, who made it possible to obtain an exact solution for the  truncated cone (frustum) microwave cavity (the geometry of the EM Drive that is being tested and proposed by NASA, Shawyer in the UK and Juan Yang in China, for space flight applications):

http://en.wikipedia.org/wiki/Sergei_Alexander_Schelkunoff

His book about Electromagnetic Waves published in 1943 in the middle of WWII has higher quality information than many contemporary books.

Quote
He crossed Siberia into Manchuria and then Japan before settling into Seattle in 1921. There he received bachelor's and master's degrees in mathematics from the State College of Washington, now the University of Washington, and in 1928 received his Ph.D. from Columbia University for his dissertation On Certain Properties of the Metrical and Generalized Metrical Groups in Linear Spaces of n Dimension.
....

Schelkunoff joined Western Electric's research wing, which became Bell Laboratories. In 1933 he and Sally P. Mead began analysis of waveguide propagation discovered analytically by their colleague George C. Southworth. Their analysis uncovered the transverse modes. Schelkunoff appears to have been the first to notice the important practical consequences of the fact that attenuation in the TE01 mode decays inversely with the 3/2 power of the frequency. In 1935 he and his colleagues reported that coaxial cable, then new, could transmit television pictures or up to 200 telephone conversations.



Schelkunoff is officially listed as being a member of Bell Labs Mathematical Center from 1929 to 1963:
http://cm.bell-labs.com/cm/ms/center/frmdir.html
« Last Edit: 02/18/2015 02:11 am by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
...

I have calculated error bounds, the magnitude of step size squared for my current run, which I hope to post in about 6 hours.

delta s ~= 0.023 mm = 0.000023 m, (delta s)^2 = 5.29E-10 m^2
delta t ~= 0.0001071422 s, (delta t)^2 = 1.1479455161248E-8 s^2

Those error bounds seem acceptable to me for the problems I am evaluating. The errors inherent in the problem formulated to represent characteristics of the actual EM thruster/vacuum chamber are far, far larger than the computational errors of the simulation.

I kind of wish people would stop posting their concerns about numerical computational deficiencies here. If they have a valid concern then they really should take it up with MIT, where the codes were written, or perhaps with ONR or DARPA, who spent the money to pay for the development. Maybe ONR/DARPA should ask for their money back?

...
As discussed here: http://forum.nasaspaceflight.com/index.php?topic=36313.msg1332799#msg1332799, those error bounds do not tell you the difference between a Finite Difference solution and an exact solution to the problem.  The magnitude of the step size is instead related, for example, to the stability of the Finite Difference operator, as shown by Friedrichs and Lax.  For example, the central-difference operator has stability problems that mandate the time step be smaller than a certain bound, because it is an explicit (as opposed to implicit) finite difference operator.  However, having a step small enough to avoid instability of the finite difference operator does not tell you how far the finite difference solution can be from an exact solution.
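As an illustration of the stability bound mentioned above, here is a minimal sketch of the Courant (CFL) condition for an explicit FDTD scheme on a uniform cubic grid, evaluated at the 0.023 mm spatial step quoted earlier. Note this only shows the stability limit in physical SI units; MEEP works in its own scale-free units, so this is not a claim about the actual run, and the function name is mine.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def max_stable_dt(dx: float, dims: int = 3) -> float:
    """Largest stable time step for the standard explicit (Yee-style)
    central-difference update: c*dt <= dx / sqrt(dims).
    A stable step only prevents blow-up; it says nothing about how
    close the discrete solution is to the exact one."""
    return dx / (C * math.sqrt(dims))

dx = 0.000023  # 0.023 mm spatial step, from the error-bound post above
dt_max = max_stable_dt(dx)
print(f"max stable dt for dx = {dx} m: {dt_max:.3e} s")
```

The point of the sketch is the distinction in the post: the CFL bound constrains the time step for stability, while accuracy (distance from the exact solution) has to be checked separately, e.g. against a known closed-form case.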

The accuracy issues we are discussing are not related to any bugs or issues concerning the people at MIT that wrote the program, they are issues inherent to the Finite Difference method.  All numerical methods have numerical issues of different kinds. 

The purpose to discuss these issues here in an open forum is to examine the numerical solutions concerning EM Drive for space flight applications, just like we examine the experiments and the theoretical explanations.   :)

We are discussing the NASA experiments, and the proposed theoretical explanations, asking and examining all kinds of questions.  Numerical solutions (related to EM Drive Developments - space flight applications) deserve equal examination, not less, than the examination of experiments and the examination of theoretical explanations.

I fully understand that such examination can get frustrating. We are all interested in finding how the EM Drive generates thrust in experiments, and our only goal is to enable space flight applications as soon as possible.

« Last Edit: 02/17/2015 09:11 pm by Rodal »

Offline aero

  • Senior Member
  • *****
  • Posts: 3629
  • 92129
  • Liked: 1146
  • Likes Given: 360
Quote
We are discussing the NASA experiments, and the proposed theoretical explanations, asking and examining all kinds of questions.  Numerical solutions (related to EM Drive Developments - space flight applications) deserve equal examination, not less, than the examination of experiments and the examination of theoretical explanations.

So what do you expect me to do about it? My 4 poor little processors have been sprinting at 99.7 to 100% capacity since I awoke this morning and started this current run. They won't finish for another 2-3 hours and are very tired. I cannot increase the resolution for this run to study convergence. I could cut the resolution, but I doubt that would produce a useful answer.

What I plan to do is reduce the lattice size after this run (which is checking the forces without the vacuum chamber - lattice size, resolution and all else remaining the same). Once I reduce the lattice size I should be able to run higher resolution to check convergence. Maybe even generate a 3D image of the fields but that is problematic for any gap sizes remotely representative of the Eagleworks test article.
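For the convergence check mentioned above, one common recipe is to estimate the observed order of convergence from runs at three successively halved grid spacings (a Richardson-style check). The sketch below uses placeholder values, not real MEEP output; the helper name is hypothetical.

```python
import math

def observed_order(f_h: float, f_h2: float, f_h4: float) -> float:
    """Estimate the observed order of convergence p from solutions
    computed on grids of spacing h, h/2, and h/4:
        p = log2(|f_h - f_h2| / |f_h2 - f_h4|)
    For a 2nd-order scheme, successive differences shrink by ~4x."""
    return math.log2(abs(f_h - f_h2) / abs(f_h2 - f_h4))

# Placeholder values consistent with 2nd-order convergence toward 1.0
f1, f2, f3 = 1.04, 1.01, 1.0025
print(f"observed order ~ {observed_order(f1, f2, f3):.2f}")
```

If the estimated order matches the scheme's nominal order, the runs are in the asymptotic regime and a Richardson extrapolation of the finest two solutions gives an error estimate for the quantity of interest (e.g. a force component).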

So what do you propose as a resolution to your critique?
« Last Edit: 02/17/2015 09:52 pm by aero »
Retired, working interesting problems

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
...
So what do you propose as a resolution to your critique?
Same thing I proposed here:

http://forum.nasaspaceflight.com/index.php?topic=36313.msg1332813#msg1332813

.. it is critical, to assess the results of a numerical solution (for example the Finite Difference method), to compare them to an exact solution.  In this case, an exact solution for a cylindrical cavity exists (http://en.wikipedia.org/wiki/Microwave_cavity#Cylindrical_cavity), and it would be worthwhile to compare how far the MEEP solution for a resonant cylindrical cavity, say of diameter=Sqrt[BigDiameterOfTruncatedCone * SmallDiameterOfTruncatedCone]=0.21060 m and the same axial length, with the same material inputs and mesh as used for the Finite Difference solution of the Truncated Cone, is from that exact solution.
....

I propose a MEEP analysis for a resonant cylindrical cavity (no dielectric), with  diameter=Sqrt[BigDiameterOfTruncatedCone * SmallDiameterOfTruncatedCone] and same axial length=0.2286 m as the NASA cavity, with the same material inputs and mesh as you used for the Finite Difference solution of the Truncated Cone. 


Actual geometry
Large OD : 11.00 " (0.2794m),
Small OD: 6.25" (0.15875 m)
Length : 9.00 " (0.2286m)
Geometric Mean Diameter: 0.2106056741875679 m
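The exact resonant frequencies for the proposed comparison cylinder follow from the standard formula in the quoted post upthread, f = (c/(2*Pi))*((X/R)^2 + (p*Pi/L)^2)^0.5. A minimal sketch, with a few Bessel-function zeros hard-coded (TM modes use zeros of J_m, TE modes zeros of J'_m), evaluated for the geometric-mean-diameter cylinder above; the function and dictionary names are mine:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

# First zeros of J0, J1 (for TM modes) and of J0', J1' (for TE modes)
BESSEL_ZEROS_TM = {(0, 1): 2.404826, (1, 1): 3.831706}
BESSEL_ZEROS_TE = {(0, 1): 3.831706, (1, 1): 1.841184}

def cavity_freq(x_mn: float, p: int, radius: float, length: float) -> float:
    """Exact resonant frequency (Hz) of an ideal, perfectly conducting,
    empty cylindrical cavity: f = (c/2pi)*sqrt((X/R)^2 + (p*pi/L)^2)."""
    return (C / (2 * math.pi)) * math.sqrt(
        (x_mn / radius) ** 2 + (p * math.pi / length) ** 2)

R = 0.2106056741875679 / 2  # geometric-mean diameter / 2, in meters
L = 0.2286                  # axial length, m

f_tm010 = cavity_freq(BESSEL_ZEROS_TM[(0, 1)], 0, R, L)
print(f"TM010: {f_tm010 / 1e9:.4f} GHz")
```

Frequencies computed this way give the exact benchmark against which a MEEP run on the same cylinder (same mesh and material inputs) could be checked before trusting the truncated-cone results.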


If the MEEP mesh for the truncated cone cannot be used to obtain a MEEP solution close to the exact solution for a cylindrical cavity of similar dimensions (an easier problem to solve than the truncated cone), then that mesh and solution (2D?) cannot give a reliable solution for the EM Drive truncated cone, concerning EM Drive for space flight applications.  (The cylindrical cavity is an easier problem because its mode shapes are either purely resonant (real solutions) or evanescent (imaginary solutions), while the truncated cone has modes that transition from resonant to evanescent, and because the truncated cone displays interesting attenuation and focusing properties.)
« Last Edit: 02/17/2015 10:23 pm by Rodal »

Offline SWGlassPit

  • I break space hardware
  • Full Member
  • ****
  • Posts: 845
  • Liked: 893
  • Likes Given: 142

The accuracy issues we are discussing are not related to any bugs or issues concerning the people at MIT that wrote the program, they are issues inherent to the Finite Difference method.  All numerical methods have numerical issues of different kinds. 

The purpose to discuss these issues here in an open forum is to examine the numerical solutions concerning EM Drive for space flight applications, just like we examine the experiments and the theoretical explanations.   :)

We are discussing the NASA experiments, and the proposed theoretical explanations, asking and examining all kinds of questions.  Numerical solutions (related to EM Drive Developments - space flight applications) deserve equal examination, not less, than the examination of experiments and the examination of theoretical explanations.


I wanted to highlight and expand on this --

Finite Difference schemes have the weakness that I described previously -- so why do people use them at all?  They are cheap and easy to implement.  Finite Element methods, for all their glory, are quite expensive in comparison, especially when the differential operator is non-self-adjoint or nonlinear (for things like fluid mechanics, they can be hideously expensive). 

The point of this discussion is not to say that you are wrong to use a particular method.  The point is that all numerical methods have flaws and drawbacks.  Intelligent use of them requires knowledge and understanding of these problems to develop strategies to mitigate them.
