Author Topic: Resonant Cavity Space-Propulsion: institutional experiments and theory  (Read 57935 times)

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
This is a thread focused on objective analysis of institutional experiments and theories concerning resonant cavity space-propulsion, and on discussing possible space applications.  The emphasis is on institutional experiments: those at universities and R&D institutions (whether federal R&D as in NASA, or R&D by private companies) as opposed to Do-It-Yourself experiments.  The term "EM Drive" is associated with Roger Shawyer's UK-patented concept (http://emdrive.com/) and it has its own devoted NSF threads (note, though, that the name EMDrive is currently trademarked in the US and in Europe for industrial applications totally unrelated to Shawyer's patent).  The emphasis in this thread is on non-Shawyer experiments and theories. 

Objective skeptical inquiry is welcomed.   Disagreements should be expressed politely, focusing on the technical, engineering and scientific aspects.   As such, the use of experimental data, mathematics, physics, engineering, drawings, spreadsheets and computer simulations is strongly encouraged, while subjective, wordy statements are discouraged.  This link

http://math.typeit.org/

enables typing of mathematical symbols, including differentiation and integration, Greek letters, etc.

Links to information from reputable journals and sources are strongly encouraged.  Please acknowledge the authors and respect copyrights. 

Conspiracy discussions are discouraged.  Commercial advertisement and promotion is prohibited.   X-Prize, venture capital and public research grant funding discussions are ok.

In order to minimize bandwidth and maximize information content, when quoting, one can use an ellipsis (...) to indicate the clipped material.

Only use the [img] embed code when the image is small enough to fit within the page. Anything wider than the width of the page makes the page unreadable, as it stretches the layout.
« Last Edit: 01/11/2016 03:37 AM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Welcome everybody !   :)

There are many possible ways to start this thread.  For example

One possible discussion is the effect on experimental results of conducting experiments in partial vacuum: thermal effects as well as a discussion of any theoretical effects.

Numerical modeling (Meep, COMSOL, SuperFish, ANSYS, etc.), accuracy and relevance vs. experiments is another possible topic of discussion.

A discussion of whether, and how, statistical methods can be made relevant to analyzing these experiments is another possible topic.

What is the best design of an experiment to minimize thermal effects, Lorentz forces and other effects such that a conclusive, convincing, statement can be made regarding these institutional experiments?

What theories could allow a resonant-cavity to actually result in useful spaceflight propulsion?  How would conservation of energy be addressed?

Have Dr. White and his group published any more papers since http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150006842.pdf or given any presentations since this one at NASA Ames?:



with particular attention to this question from a NASA Ames scientist:



Are NASA's proposed experiments at NASA Glenn going to take place?

Paul March's discussion of the cylindrical resonant cavity with RF chokes experiment at NASA using an interferometer, and the discussions we had with user StrongGR and his paper (http://arxiv.org/abs/1505.06917v1 ) analyzing it...





Countless topics are open for discussion and still unsettled!
« Last Edit: 01/04/2016 05:18 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
IMPORTANCE OF DIELECTRIC INSERT IN NASA'S REPORTED RESULTS

Can the dielectric insert in NASA's tests be responsible for the higher results in NASA's experiments than in Tajmar's experiments at TU Dresden (http://bit.ly/1mylt2q and http://emdrive.wiki/Experimental_Results)?  Can the dielectric insert contribute to real thrust in what Dr. White at NASA has called "Q-thrusters", or can it contribute to experimental artifacts?  Prior discussions with Notsosureofit ( http://emdrive.wiki/@notsosureofit_Hypothesis ) indicated that his theory would show thrust even in a cavity with uniform cylindrical cross-section, as long as the dielectric insert was placed asymmetrically.  Prof. Woodward is said to be of the opinion that if NASA's experiments show real thrust useful for spaceflight, it must be a result of the Mach effect due to the dielectric insert.

It has been questioned (by others at another NSF thread) whether NASA's reported test without a dielectric insert was in resonance.

The explicit reference to NASA's reported result without a dielectric insert is the following:

page 18 of
 Anomalous Thrust Production from an RF Test Device Measured on a Low-Thrust Torsion Pendulum
David A. Brady, Harold G. White, Paul March, James T. Lawrence, and Frank J. Davies
July 28-30, 2014, Cleveland, OH
AIAA 2014-4029 Propulsion and Energy Forum
(This material was declared a work of the U.S. Government and therefore is not subject to copyright protection in the United States:
http://www.libertariannews.org/wp-content/uploads/2014/07/AnomalousThrustProductionFromanRFTestDevice-BradyEtAl.pdf
)

Quote
We performed some very early evaluations without the dielectric resonator (TE012 mode at 2168 MHz, with power levels up to ~30 watts) and measured no significant net thrust.

This is a very significant statement, as NASA reports having measured no thrust without a dielectric insert: this NASA-reported result therefore runs contrary to the claims of R. Shawyer regarding his "EM Drive" (see tabulated claimed results without a dielectric insert in http://emdrive.wiki/Experimental_Results for his "Demonstrator" and "Boeing Flight Thruster") as well as the claimed results by Yang (http://emdrive.wiki/Experimental_Results).  (It has recently been reported that Chinese academicians terminated Yang's resonant cavity propulsion project in 2014.)   

Up to now, all "anomalous thrust" data reported by NASA has been obtained with resonant cavities containing a dielectric insert placed asymmetrically inside the cavity.

So, was the NASA test without a dielectric insert in resonance?

Paul March provided a valuable NASA report by Frank Davies (one of the co-authors of the 2014 NASA "Anomalous Thrust Production" paper linked above) [attached below as a PDF file "Frustum modes overview"], using COMSOL Finite Element Analysis, which calculates the natural frequency (for mode shape TE012) without a dielectric insert to be very close to the frequency at which NASA reported performing the test (without a dielectric insert):

measured frequency at which test was performed:       2.168 GHz
calculated natural frequency (COMSOL FEA analysis):   2.179 GHz
difference: (2.179 - 2.168)/2.168 = 0.5%

Therefore the evidence supports that the NASA test without a dielectric was indeed in resonance: the measured frequency was extremely close to the calculated natural frequency (only a 0.5% difference, which is easily explained by minor differences between the dimensions of the actual tested piece and the dimensions used in the calculation).


Furthermore, Paul March stated that they had S11 and S21 measurements during these tests and that testing was performed when the cavity was shown to be in resonance as per NASA measurements.

« Last Edit: 01/04/2016 05:35 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
EVIDENCE THAT NASA'S TEST WITHOUT A DIELECTRIC INSERT WAS PERFORMED AT A RESONANT NATURAL FREQUENCY OF THE CAVITY


For thoroughness, and to further address (and hopefully "put to bed") the hypothesis advanced by others "that NASA's test without a dielectric insert was not in resonance", we calculate the natural frequency of mode shape TE012 for NASA's dimensions.

We use an exact solution of Maxwell's equations for standing-wave resonance of a truncated cone that I obtained using Wolfram Mathematica.  The solution uses spherical Bessel functions and associated Legendre functions (as per Wolfram Mathematica definitions), in an intrinsic system of embedded spherical coordinates for the frustum of a cone.  The solution is similar to Greg Egan's solution (http://gregegan.customer.netspace.net.au/SCIENCE/Cavity/Cavity.html) except in its generality: the solution obtained using Wolfram Mathematica can calculate mode shapes for arbitrarily large quantum numbers m, n, p (while Egan's, as presented, was restricted to low order).  I have compared my solution (using Mathematica) to the examples shown by Egan, and the agreement is excellent.

NASA's frustum of a cone dimensions

(as given by Paul March in a post as Star-Drive in thread 2 of the NSF EM Drive thread, see: https://forum.nasaspaceflight.com/index.php?topic=36313.msg1326997#msg1326997  )

Quote from: Paul March NASA
The copper frustum we built and now are using has the following internal copper surface dimensions.
Large OD: 11.00" (0.2794 m)
Small OD: 6.25" (0.1588 m)
Length: 9.00" (0.2286 m)

and as given by Frank Davies (NASA/JSC/EP5) (see the Frank Davies document https://forum.nasaspaceflight.com/index.php?action=dlattach;topic=39214.0;attach=1091650 )

Quote from:  Frank Davies NASA
Bottom diam.: 11.01 inch (279.7 mm)
Top diam.: 6.25 inch (158.8 mm)
Height: 9.00 inch (228.6 mm)
Material: 101 Copper Alloy

The only difference is 11.01 inches for the large end as given by Frank Davies vs. 11.00 inches given by Paul March.  I will use the dimensions given by Frank Davies, for proper comparison to his COMSOL FEA analysis:


I used the following input parameters, as used by Frank Davies:

bigDiameter = (11.01 inch)*(2.54 cm/inch)*(1 m/(100 cm));
smallDiameter = (6.25 inch)*(2.54 cm/inch)*(1 m/(100 cm));
axialLength = (9 inch)*(2.54 cm/inch)*(1 m/(100 cm));

tanHalfAngleCone = (bigDiameter - smallDiameter)/(2*axialLength);

halfAngleConeRadians = ArcTan[tanHalfAngleCone];

halfAngleConeDegrees = (180/Pi)*halfAngleConeRadians;

r2 = Mean[{axialLength /(1 - ( smallDiameter /bigDiameter)), bigDiameter/(2*Sin[halfAngleConeRadians])}];

r1 = Mean[{axialLength /(( bigDiameter/ smallDiameter) - 1), smallDiameter/(2*Sin[halfAngleConeRadians])}];

Notice that, since the exact solution assumes spherical ends while NASA's truncated cone has flat ends, the spherical radii r1 and r2 are calculated as the mean value of the radii to a) the intersection of the ends with the lateral conical walls and b) the top of the dome.  From analysis of the problem and verification using numerical analysis (comparison with COMSOL FEA solutions for a large number of examples), I have found that this mean value is an excellent approximation to the solution of Maxwell's equations for a truncated cone with flat ends.

These input parameters result in the following values (in SI units) for the spherical radii and the cone half angle:

r1 = 0.305316 m

r2 = 0.537845 m

halfAngleConeDegrees = 14.8125 degrees
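The Mathematica input above can be cross-checked in any language; here is a minimal Python transcription of the same formulas (SI units), which reproduces the radii and half-angle quoted above:

```python
import math

# Cavity internal dimensions (Frank Davies, NASA/JSC/EP5), converted to meters
big_diameter   = 11.01 * 0.0254
small_diameter = 6.25  * 0.0254
axial_length   = 9.00  * 0.0254

# Cone half-angle from the difference of the end radii over the axial length
half_angle = math.atan((big_diameter - small_diameter) / (2 * axial_length))

# Spherical radii approximating the flat ends: mean of (a) the radius to the
# corner where each flat end meets the conical wall and (b) the radius to the
# top of the equivalent spherical dome
r2 = 0.5 * (axial_length / (1 - small_diameter / big_diameter)
            + big_diameter / (2 * math.sin(half_angle)))
r1 = 0.5 * (axial_length / (big_diameter / small_diameter - 1)
            + small_diameter / (2 * math.sin(half_angle)))

print(round(r1, 6), round(r2, 6), round(math.degrees(half_angle), 4))
```

This reproduces r1 ≈ 0.305316 m, r2 ≈ 0.537845 m and a half-angle of ≈ 14.81 degrees.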





COMPARISON OF SOLUTIONS WITH NASA's experiment

measured frequency at which NASA test was performed:                                  2.168 GHz
calculated natural frequency (exact solution, Dr. Rodal using Wolfram Mathematica):   2.165 GHz
calculated natural frequency (COMSOL FEA analysis by Frank Davies at NASA):           2.179 GHz

difference between NASA's COMSOL FEA and measurement: (2.179 - 2.168)/2.168 = 0.5%

difference between exact solution and measurement: (2.165 - 2.168)/2.168 = -0.1%

Therefore the evidence supports that the NASA test without a dielectric was indeed in resonance: the measured frequency was extremely close to the calculated natural frequency.

« Last Edit: 01/12/2016 07:48 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
....

Rodal posted this a few pages back.  It may or may not relate to other frustums and would probably depend on material thickness etc.  It is the ResearchGate link.  http://forum.nasaspaceflight.com/index.php?topic=39004.msg1468340#msg1468340  I think it mentioned time to buckle, if I remember correctly. 

Edit: I don't think much long term thrust could be had from this after thermal equilibrium is reached.  The force should decrease with time as the thermal expansion decelerates.  It might be helpful to know the time to thermal equilibrium.

Edit2: Also, any positive thrust signal observed due to thermal expansion should be followed by an equal and opposite signal upon powering down.  Thermal contraction would give a negative thrust. 

1) The thermal buckling force is associated with the evolution of the temperature profile, and it is primarily a question of the structural stability of the structure in question.  The first thing to understand is that the flat configuration of a membrane, plate or simple column is unstable, just as a ball on top of a hill is in an unstable form of equilibrium.  A force pushing a ball on top of a hill, once it overcomes any small obstacles or friction along the way, will result in the ball naturally seeking a more stable configuration towards the bottom of the hill.  Similarly, a force on a column will result in buckling of the column once the buckling force limit is exceeded.





2) The thermal buckling force rapidly increases with time until buckling is reached.  Contrary to the intuition of some people, who have stated that thermal effects are too slow to be responsible for artifacts in these experiments, my article shows that the rise of this thermal buckling force can occur within the time frames associated with the force vs. time traces of these experiments. 

3) It does not necessarily follow that the structure will return to its previous unstable equilibrium configuration once the temperature returns to its original value.  For the structure to return to its previous point of unstable equilibrium, a number of assumptions must hold.  For example, one must assume perfect elasticity of the structure.  Any deviation from perfect elasticity will prevent the structure from returning to its previous unstable configuration: for example, if the metal follows an elastic-plastic stress-strain law, or if friction is involved.  Everyday experience provides multiple examples of buckling: the flatness of plates and membranes is an unstable configuration, and once buckling has occurred it is difficult for the structure to return exactly to its original unstable flat configuration.



4) The example of thermal buckling I gave in my paper was not meant as the only explanation for the thermal forces that can be encountered in these experiments.  On the contrary, if one goes back to read my post, one will find that I explicitly wrote that upon further examination the main thermal effect in the experiments by Dr. White's team at Eagleworks (NASA Johnson) was found to be thermal expansion shifting the center of mass, which produced spurious force vs. time artifacts in NASA's experimental traces.  It is also straightforward to see that these thermal effects remain when testing in vacuum.  Since there is no thermal convection in vacuum, these thermoelastic effects (1) thermal buckling, 2) center-of-mass shift from thermal expansion, etc.) may become even more prominent, as the convection effects are eliminated.
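As an order-of-magnitude illustration (not a model of the actual NASA cavity), the critical temperature rise for thermal buckling of an axially constrained, pinned-pinned member follows from equating the Euler buckling stress to the thermal stress: ΔT_cr = π²/(α (L/k)²), where α is the thermal expansion coefficient and L/k the slenderness ratio.  With a copper-like α and an assumed slenderness typical of a thin wall strip, only a few kelvin are needed:

```python
import math

# Critical temperature rise for thermal buckling of an axially constrained,
# pinned-pinned column: sigma_cr = pi^2 E / (L/k)^2 and sigma_thermal = E*alpha*dT,
# hence dT_cr = pi^2 / (alpha * (L/k)^2).  Illustrative numbers only.
alpha = 17e-6        # 1/K, approximate thermal expansion coefficient of copper
slenderness = 300.0  # L/k, assumed value for a thin, slender wall strip

dT_cr = math.pi**2 / (alpha * slenderness**2)
print(round(dT_cr, 2))  # only a few kelvin suffice for so slender a member
```

Note that dT_cr is independent of the elastic modulus, which cancels out; the more slender the member, the smaller the temperature rise needed to buckle it.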
« Last Edit: 01/05/2016 04:00 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
EVIDENCE THAT NASA'S TEST WITHOUT A DIELECTRIC INSERT WAS PERFORMED AT A RESONANT NATURAL FREQUENCY OF THE CAVITY


For thoroughness, and to further "put to bed" the hypothesis advanced by others "that NASA's test without a dielectric insert was not in resonance", we calculate the natural frequency of mode shape TE012 for NASA's dimensions under the naive assumption that the cavity had spherical ends instead of flat ends, to show that even this assumption has only a small effect on the natural frequency. 

We use the dimensions given by Frank Davies (NASA/JSC/EP5) (see the Frank Davies document https://forum.nasaspaceflight.com/index.php?action=dlattach;topic=39214.0;attach=1091650 )

Quote from:  Frank Davies NASA
Bottom diam.: 11.01 inch (279.7 mm)
Top diam.: 6.25 inch (158.8 mm)
Height: 9.00 inch (228.6 mm)
Material: 101 Copper Alloy

I used the following input parameters, as used by Frank Davies:

bigDiameter = (11.01 inch)*(2.54 cm/inch)*(1 m/(100 cm));
smallDiameter = (6.25 inch)*(2.54 cm/inch)*(1 m/(100 cm));
axialLength = (9 inch)*(2.54 cm/inch)*(1 m/(100 cm));

tanHalfAngleCone = (bigDiameter - smallDiameter)/(2*axialLength);

halfAngleConeRadians = ArcTan[tanHalfAngleCone];

halfAngleConeDegrees = (180/Pi)*halfAngleConeRadians;

r2 = bigDiameter/(2*Sin[halfAngleConeRadians]);

r1 = smallDiameter/(2*Sin[halfAngleConeRadians]);

Notice that for this calculation we naively compute the natural frequency for NASA's test without a dielectric as if the cavity had spherical ends instead of flat ends (the case of flat ends was calculated in my previous post).

These input parameters result in the following values (in SI units) for the spherical radii and the cone half angle:

r1 = 0.310475 m

r2 = 0.546933 m

Notice that these spherical radii are longer than the ones previously calculated for the case simulating flat ends.

halfAngleConeDegrees = 14.8125 degrees

which is the same cone half-angle previously calculated for the case simulating flat ends.
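As with the flat-ends case, this variant is easy to cross-check; a minimal Python transcription of the spherical-ends radii (taken straight to the spherical caps, with no averaging) is:

```python
import math

# Frank Davies' internal dimensions, converted to meters
big_diameter   = 11.01 * 0.0254
small_diameter = 6.25  * 0.0254
axial_length   = 9.00  * 0.0254

half_angle = math.atan((big_diameter - small_diameter) / (2 * axial_length))

# Spherical radii for a cavity idealized with true spherical end caps
r2 = big_diameter   / (2 * math.sin(half_angle))
r1 = small_diameter / (2 * math.sin(half_angle))

print(round(r1, 6), round(r2, 6))
```

This reproduces r1 ≈ 0.310475 m and r2 ≈ 0.546933 m, both longer than the flat-ends values, with the same half-angle.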





COMPARISON OF SOLUTIONS WITH NASA's experiment

measured frequency at which NASA test was performed:                          2.168 GHz
calculated natural frequency (exact solution, assuming spherical ends):       2.129 GHz
calculated natural frequency (exact solution, assuming flat ends):            2.165 GHz

calculated natural frequency (COMSOL FEA analysis by Frank Davies at NASA):   2.179 GHz

difference between NASA's COMSOL FEA and measurement: (2.179 - 2.168)/2.168 = 0.5%

difference between exact solution (flat ends) and measurement: (2.165 - 2.168)/2.168 = -0.1%

difference between exact solution (spherical ends) and measurement: (2.129 - 2.168)/2.168 = -1.80%
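The percentage differences above are straightforward to verify; a quick Python check (the flat-ends figure rounds to the quoted -0.1%):

```python
# Relative differences between calculated natural frequencies and the
# measured NASA test frequency (all in GHz)
f_meas = 2.168  # measured frequency, TE012 mode, no dielectric

for label, f_calc in [("spherical ends", 2.129),
                      ("flat ends",      2.165),
                      ("COMSOL FEA",     2.179)]:
    diff = (f_calc - f_meas) / f_meas
    print(f"{label}: {diff:+.2%}")
```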


Therefore we have shown that even when assuming that the resonant cavity had spherical instead of flat ends, the natural frequency would have differed from the tested frequency by only 1.80%.

(NASA's test was performed with a resonant cavity that had flat ends, which has a calculated natural frequency in excellent agreement with NASA's tested frequency)




Here is an updated list of calculations that verify the experimental measurement by NASA:



COMPARISON OF SOLUTIONS WITH NASA's experiment (as of Dec 19, 2016)

measured frequency at which NASA test was performed:                           2.168 GHz
calculated natural frequency (Rodal exact solution, assuming spherical ends):  2.129 GHz
calculated natural frequency (Rodal exact solution, assuming flat ends):       2.165 GHz
calculated natural frequency (analysis by X_Ray, method A):                    2.16512 GHz
calculated natural frequency (analysis by X_Ray, method B):                    2.1653438 GHz

calculated natural frequency (COMSOL FEA analysis by Frank Davies at NASA):    2.179 GHz
calculated natural frequency (FEKO BEM analysis by Monomorphic):               2.17895 GHz
calculated natural frequency (Keysight EMPro FEA analysis by X_Ray):           2.17983 GHz
« Last Edit: 12/19/2016 06:21 PM by Rodal »

Offline RotoSequence

  • Full Member
  • ****
  • Posts: 677
  • Liked: 513
  • Likes Given: 704
Is it possible to evaluate these experimental devices' sensitivities to deviations from the calculated optimum resonance frequency at this time?

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Is it possible to evaluate these experimental devices' sensitivities to deviations from the calculated optimum resonance frequency at this time?

Great question.

Using the definition of quality of resonance

we can express it as

Δf /f = 1/Q

where
f= natural frequency
Δf= bandwidth of resonance
Q= quality factor of resonance (a measure inverse to damping)

Q= 1/(2ζ)

where ζ  is the damping ratio

and hence we can express the previous differences between calculation and experiment in terms of the maximum Q for which the calculated frequency would still fall within the resonance bandwidth of the experimentally measured frequency:

difference between NASA's COMSOL FEA and measurement: (2.179 - 2.168)/2.168 = 0.5% = 1/200

difference between (Mathematica) exact solution and measurement: (2.165 - 2.168)/2.168 = -0.1% = -1/1000

So,

1) the COMSOL Finite Element Analysis carried out by NASA would place the calculated frequency inside the resonance bandwidth only for a Q smaller than 200

2) the exact solution with Wolfram Mathematica would place the calculated frequency inside the resonance bandwidth only for a Q smaller than 1000

We can also restate this as:

for a Q = 10,000, a numerical solution must differ from the measured frequency by less than 0.01%

for a Q = 50,000, by less than 0.002%

for a Q = 100,000, by less than 0.001%
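The relationship Δf/f = 1/Q translates directly into the frequency precision a calculation needs in order to land inside the resonance bandwidth; a minimal sketch:

```python
# Fractional bandwidth of a resonance is 1/Q, so a calculated frequency lands
# inside the bandwidth only if its fractional error is below 1/Q.
f_measured = 2.168e9  # Hz, NASA test frequency (TE012 mode, no dielectric)

for Q in (200, 1_000, 10_000, 50_000, 100_000):
    fractional_precision = 1.0 / Q
    bandwidth_hz = f_measured / Q
    print(f"Q = {Q:>7}: required precision = {fractional_precision:.3%}, "
          f"bandwidth = {bandwidth_hz / 1e3:.1f} kHz")
```

For Q = 10,000 the bandwidth around 2.168 GHz is only about 217 kHz, which is why no numerical model of an imperfectly known geometry can pin down the peak; only S11/S21 measurements can.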

________

Conclusion: no numerical solution comes close to the precision needed to find the bandwidth of resonance for the high Q that one is seeking.  The higher the Q, the more precision is needed.

The precision is unattainable because one does not know the exact geometry of the resonant cavity to that precision.

All that one can do with numerical solutions for a resonant cavity with a high Q (>10,000) is to tell where the resonance is, to a precision of about 1%, perhaps 0.1%.  Finding the actual bandwidth of resonance and the resonance peak has to be done empirically, by S21 and S11 measurements.

So, for NASA's experiment without a dielectric we can say that the exact solution says that there was indeed a natural frequency for mode TE012 within 0.1% of the measured frequency, but as to whether the measured frequency was at the resonant peak, one has to rely on NASA's team having actually found peak resonance with S21 and S11 measurements (because we don't know the dimensions of NASA's resonant cavity to the precision required to calculate the resonant peak with all the required digits of numerical precision).

________

If I have the time, I would like to make a plot of the resonant frequency calculated by the exact solution, for variable:

1) End diameter
2) Cone angle
3) Length
« Last Edit: 01/06/2016 01:18 AM by Rodal »

Offline dustinthewind

  • Full Member
  • ****
  • Posts: 494
  • U.S. of A.
  • Liked: 195
  • Likes Given: 220
...

Thanks for clarifying that you addressed the buckling and the expansion separately.  I apologize, as I may not have made that clear in my statement.  I was away a day so wasn't able to respond immediately.  I take it that buckling is a one-time thing, right?  So if it occurs once, it shouldn't occur again once the deformation is permanent?  If so, then yeah, I guess that would give a single impulse without a retraction event. 

You mentioned the artifact thrust from NASA is partly due to thermal expansion or displacement of the mass center?  Did they also observe the retraction of that center of mass when they shut down the power?  I have to think back to those plots; I vaguely recall that maybe there was a retraction event, but I will have to go back and look. 

I was thinking that if we knew the time to thermal equilibrium, would it be advisable to at least keep the frustum powered on for a time longer than that so that we are observing more than just those events?

Edit: well to compound the problem in most cases the frustum is in air so there is the convection problem included with long term tests.

Edit2: one thing that worries me is that putting the frustum in an insulated box (closed system) might prevent thermal equilibrium from being reached (continuous heating), unless maybe an internal heat sink could reduce that to some extent. 
« Last Edit: 01/07/2016 07:33 AM by dustinthewind »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Quote from: dustinthewind
Thanks for clarifying that you addressed the buckling and the expansion separately. ... I take it that buckling is a one-time thing, right? ...

You mentioned the artifact thrust from NASA is partly due to thermal expansion or displacement of the mass center?  Did they also observe the retraction of that center of mass when they shut down the power? ...

I was thinking that if we knew the time to thermal equilibrium, would it be advisable to at least keep the frustum powered on for a time longer than that so that we are observing more than just those events? ...

First I want to thank you for your post, which gives me the opportunity to clarify a number of issues.

1) It is my present understanding that all NASA Johnson Eagleworks experiments with a resonant cavity (both those in ambient conditions and those in vacuum) are affected by thermal expansion shifting the center of mass, which severely affects the experimental results.  I am referring solely to thermal expansion shifting the center of mass, not to thermal buckling.  It is my understanding (from Paul March's posts in the EM Drive threads) that Paul has accepted the importance of this effect.  Indeed, in his latest posts, Paul wrote that

a) NASA was working on software to automatically subtract the effect of thermal expansion shifting the center of mass from what they thought was the real anomalous force

b) NASA changed their experimental set-up to minimize this  effect of thermal expansion shifting the center of mass

2) It took some time for NASA to fully understand and be able to model the effect of thermal expansion shifting the center of mass, as its analysis is not trivial.  Early on, Paul March recognized that it was affecting the baseline.  If you look at NASA's traces of force vs. time, this shifting of the baseline is seen in all traces, to a greater or lesser degree; it changes with time and from test to test.  Initially NASA's team addressed the "shifting baseline" problem by subtracting the baseline shift.  However, this did not address the effect on the magnitude of the force vs. time itself, because thermal expansion shifting the center of mass is registered as if it were an anomalous force, when in fact it is purely thermal.  In other words, this effect cannot be properly taken into account solely by "rectifying the baseline".

3) It was only recently (late 2014, after their AIAA conference report) that NASA started to address the whole issue of thermal expansion shifting the center of mass, by developing modeling software to subtract this effect from the magnitude of the force vs. time.  That was the situation as of the last time Paul March was able to post on the NSF site.  We eagerly await the upcoming publication of their experiments (I presume in the near future, in an AIAA refereed journal?) to be able to assess their ability to fully address this effect of thermal expansion shifting the center of mass in their experiments.
« Last Edit: 01/10/2016 12:34 AM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
EXPERIMENTAL PROOF THAT NASA'S TEST WITHOUT A DIELECTRIC INSERT WAS IN RESONANCE AT THE FREQUENCY REPORTED IN NASA'S REPORT


Since we had concluded in http://forum.nasaspaceflight.com/index.php?topic=39214.msg1470613#msg1470613

that:

Quote
for NASA's experiment without a dielectric we can say that the exact solution says that there was indeed a natural frequency for mode TE012 within 0.1% of the measured frequency, but as to whether the measured frequency was at the resonant peak, one has to rely on NASA's team having actually found peak resonance with S21 and S11 measurements (because we don't know the dimensions of NASA's resonant cavity to the precision required to calculate the resonant peak with all the required digits of numerical precision).



Finally, we reproduce again the experimental data from NASA Johnson Eagleworks Laboratory that proves that their experiment without dielectric inserts in their frustum of a cone cavity was indeed in resonance.

The resonance for mode shape TE012 without dielectric inserts was measured with an Agilent Model 9923A, 4.0 GHz Field Fox Vector Network Analyzer (VNA) both in the S11 and S21 modes (as shown in the pictures below) using the frustum RF loop antenna as input and the frustum sense antenna located 180 degrees around from the loop antenna with both antennas being at the same 15% of the height from the large end of the frustum, i.e., 0.15 * 9.00” = 1.35” or 34.29mm away from the large end.   

The TE012 resonant frequency without the dielectric PE disc inserts was measured at 2.167137 GHz using either the S11 or S21 methods as shown by the two attached VNA slides. 

Thus, any claims that this test without dielectric inserts in NASA's frustum of a cone cavity, with mode shape TE012 at 2.167 GHz, was not in resonance are shown to be completely baseless, false and misleading.

This factual information shows, without a doubt, that NASA's frustum of a cone without dielectric inserts was indeed in resonance with mode shape TE012 at 2.167 GHz, in agreement with NASA's report, with the COMSOL Finite Element Analysis calculation, and with the exact solution I calculated using Wolfram Mathematica.
« Last Edit: 01/11/2016 07:40 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Quote from: Rodal
EXPERIMENTAL PROOF THAT NASA'S TEST WITHOUT A DIELECTRIC INSERT WAS IN RESONANCE AT THE FREQUENCY REPORTED IN NASA'S REPORT
...
The TE012 resonant frequency without the dielectric PE disc inserts was measured at 2.167137 GHz using either the S11 or S21 methods as shown by the two attached VNA slides. ...

Based on this measurement data I had a look at my calculated frequency for this case and find:

Mode    calculated (GHz)   COMSOL (GHz)   diff COMSOL (%)   diff COMSOL (GHz)   measured NASA (GHz)   diff measured (%)
TE012   2.1653438127       2.1794         -0.64             -0.014              2.167138              -0.08279

Maybe this is due to tiny differences between the real, as-built cavity and the COMSOL model.
Of course there are much larger differences for many of the other modes in my spreadsheet*. As I wrote elsewhere, I trust the field simulations more, because they work.

* I use it only for general overview.
« Last Edit: 03/12/2016 04:36 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...

Excellent, your solution gives the same natural frequency (2.165 GHz) I calculated with the exact solution using Wolfram Mathematica, as reported here:  http://forum.nasaspaceflight.com/index.php?topic=39214.msg1469866#msg1469866

Further evidence that validates NASA's report that the test without a dielectric insert was in TE012 mode shape resonance at the measured frequency!

I think that NASA built the truncated cone cavity to within machining tolerances of +/-0.01", giving internal dimensions as follows:

bigDiameter = 11.00" +/-0.01"   --->  total % error = 0.18% = 1/550
smallDiameter = 6.25" +/-0.01"  --->  total % error = 0.32% = 1/313
axialLength = 9.00" +/-0.01"    --->  total % error = 0.22% = 1/450

Therefore (taking the median total % error = 0.22% = 1/450), the dimensional tolerance of NASA's frustum is such that only for Q < 450 can one hope for the predicted frequency to fall within the resonant bandwidth, given the frequency uncertainty due to dimensions (1/450).

Importantly, the ~0.1% difference between the measured frequency and both X_RaY's solution and the exact Wolfram Mathematica solution is well within the geometrical tolerance uncertainty of NASA's truncated cone itself.  See NASA's design dimensions for their frustum of a cone, attached below.
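The tolerance arithmetic above can be sketched in a few lines of Python (the dimensions and the ±0.01" tolerance are those quoted in the post):

```python
# Dimensional tolerance of NASA's frustum and the implied Q bound.
# A resonator with quality factor Q has a fractional -3 dB bandwidth of
# roughly 1/Q, so a predicted frequency carrying a fractional uncertainty
# of 1/450 can only be trusted to sit inside the bandwidth when Q < ~450.

tol = 0.01  # machining tolerance, inches (+/-)
dims = {"bigDiameter": 11.00, "smallDiameter": 6.25, "axialLength": 9.00}  # inches

# total (peak-to-peak) fractional error for each dimension
errors = {name: 2 * tol / d for name, d in dims.items()}
for name, err in errors.items():
    print(f"{name}: total % error = {100 * err:.2f}%")

median_error = sorted(errors.values())[1]  # median of the three values
q_bound = 1 / median_error
print(f"prediction stays inside the bandwidth only for Q < ~{q_bound:.0f}")
```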
« Last Edit: 01/12/2016 02:09 AM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
QUALITY OF RESONANCE "Q" FOR NASA'S TEST WITHOUT A DIELECTRIC INSERT

Finally, what was the predicted Quality of Resonance ("Q") for NASA's test without a dielectric insert?

Using the following resistivity for the copper alloy used for this test:

Material: Copper alloy 101

resistivity = 1.71*10^(-8) ohm meter

Sources for this material value:
http://www.azom.com/article.aspx?ArticleID=2850#_Physical_Properties_of  http://www.husseycopper.com/production/alloys/electrical/c-101-00/

Using the following geometrical dimensions for the frustum of a cone, as used by Frank Davis:

bigDiameter = (11.01 inch)*(2.54 cm/inch)*(1 m/(100 cm));
smallDiameter = (6.25 inch)*(2.54 cm/inch)*(1 m/(100 cm));
axialLength = (9 inch)*(2.54 cm/inch)*(1 m/(100 cm));

the exact solution, using Wolfram Mathematica to solve Maxwell's equations, gives:

Q = 78642

So, a very good Q value is predicted for mode shape TE012 at the frequency:

measured frequency at which NASA test was performed:                                              2.168 GHz
calculated natural frequency (exact solution, Dr. Rodal using Wolfram Mathematica):      2.165 GHz

for NASA's test without a dielectric insert that resulted in no thrust.

The fact that this NASA test resulted in zero "anomalous force", and that Paul March at NASA had the great insight to introduce dielectric inserts at the small end to produce the anomalous force, is one of the most important data points in the history of EM Drive experiments.
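As a cross-check on these numbers, the classical skin depth of the copper wall at the TE012 frequency can be computed directly (a sketch using the low-frequency skin-depth formula and the resistivity quoted above):

```python
import math

# Classical (low-frequency) skin depth, delta = sqrt(rho / (pi * f * mu)),
# for the copper wall at the TE012 natural frequency.
rho = 1.71e-8            # resistivity of copper alloy 101, ohm*m (value from the post)
mu = 4 * math.pi * 1e-7  # permeability of copper ~ mu0, H/m (non-magnetic, mu_r ~ 1)
f = 2.16467e9            # TE012 natural frequency, Hz (value from the post)

delta = math.sqrt(rho / (math.pi * f * mu))
print(f"skin depth = {delta * 1e6:.4f} micrometers")  # ~1.41 micrometers
```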
« Last Edit: 01/12/2016 08:29 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...
This is great news. :) I came to nearly the same conclusion some time last year (Q = 79011). I never posted it, as I was not sure about the formula (I found an approximation in a CERN paper about cavities, if my memory is correct) or about my implementation. No -3 dB bandwidth is needed for the calculation; it depends only on the mode, the volume and the conductivity.
Based on this, the Q at larger volumes is in general (mode dependent) bigger than for smaller volumes. I think more energy can be stored in larger volumes.
If I divide all dimensions by a factor of 10, I get a 10 times higher resonant frequency (good so far) but a Q of only 24985. Could you be so kind as to check this please? I feel something may still be wrong with this calculation, although the number for the original dimensions fits yours very well.
« Last Edit: 01/12/2016 09:39 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...
You are correct on all counts  :)

I will be posting further...

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...
You are correct on all counts  :)

I will be posting further...
Thanks very much   :)
« Last Edit: 01/12/2016 09:52 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Notice the significance of this:

1) Two completely independent researchers, using completely different solutions (X_RaY and Rodal), predict that the NASA cavity had a natural frequency for mode shape TE012 at the same frequency at which NASA reported their experimental finding that the cavity without dielectric inserts experienced no "anomalous force".

2) Furthermore, both independent researchers predict that the cavity should have experienced strong resonance, with Q ≈ 79000.

This independently confirms:

3) NASA's experimental results are of such high quality that the reported frequencies and resonance can be reproduced by independent researchers.

4) Roger Shawyer's strange claim that the cut-off frequency formula for open waveguides should also apply to a closed cavity like the one used by NASA is shown, as per points 1 through 3 above, to be false regarding resonance, with Q ≈ 79000.

« Last Edit: 01/12/2016 11:27 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
The unloaded Q (Q_0) at eigenresonance is related to the statistical lifetime of any given photon within the cavity before it disappears via conversion into heat.
The effective or loaded Q is different: it depends on the external Q and the coupling factor, and in general it is (much) lower than the unloaded Q. A photon may be coupled out of the cavity well before the statistical lifetime for the cavity alone (Q_0) is reached; its energy is then lost outside the cavity.
My questions are now:

a) What is known about the external Q in general for an amplifier (with/without external load and circulator) with Z_in = 50 Ohm? What is the way to compute this?

b) What was the coupling factor in the NASA experiment?

I don't believe Q_0 is THE important value for thrust generation. That would be unnatural, since the antenna feed is a connection to the outside; it can't be ignored.
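For reference on the loaded-versus-unloaded distinction, the standard relation is Q_loaded = Q_0/(1 + β), where β is the coupling factor. A minimal sketch (the β values are illustrative, not numbers from the NASA report):

```python
# Standard relation between unloaded and loaded Q:
#   1/Q_loaded = 1/Q_0 + 1/Q_ext,  beta = Q_0 / Q_ext
#   =>  Q_loaded = Q_0 / (1 + beta)
# At critical coupling (beta = 1) the loaded Q is half the unloaded Q.

def loaded_q(q0: float, beta: float) -> float:
    return q0 / (1.0 + beta)

q0 = 78642.0  # unloaded Q from the exact solution quoted earlier
for beta in (0.5, 1.0, 2.0):  # under-, critically and over-coupled (illustrative)
    print(f"beta = {beta}: Q_loaded = {loaded_q(q0, beta):.0f}")
```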
« Last Edit: 01/21/2016 07:16 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
PROOF THAT THE QUALITY FACTOR OF RESONANCE "Q" SCALES LIKE √L AND THEREFORE THAT THE FORCE/POWER ALSO SCALES LIKE √L

1) Force per power input of EMDrive and its relationship to a photon rocket

We start with the definition of power as the time derivative of work, and therefore equal to the vector dot product of force and velocity:

P = dW/dt = F · v

For an ideal photon rocket with a perfectly collimated photon beam, the exhaust velocity (not the spaceship velocity !!!) is the speed of light c and therefore:

F*c = Pin

where Pin is the Power input into the exhaust ("Power Input" here only stands for the power at this late stage, notice that there may be further losses from the power plant, coupling factor, etc.).  Therefore, for an ideal photon rocket, the force per input power is

(F /  Pin)photonRocket = 1/c

Side note: for rockets exhausting particles-with-mass at speeds much lower than the speed of light, for example ion thrusters, this ratio is 2/v instead of 1/c, where v is the exhaust speed of the particle-having-mass (as propellant particles-with-mass, unlike photons, need to be accelerated to the exhaust speed; see https://en.wikipedia.org/wiki/Spacecraft_propulsion#Power_to_thrust_ratio and https://en.wikipedia.org/wiki/Specific_impulse#Energy_efficiency for the reason for the factor of 2, as E = (1/2)mv² instead of E = mc², https://en.wikipedia.org/wiki/Kinetic_energy#Relativistic_kinetic_energy_of_rigid_bodies).  Therefore, the efficiency (F / Pin) for ion thrusters is much larger than that of a photon rocket (since v << c, and hence 2/v >> 1/c), and that is why the photon rocket has not seen, and is not envisioned to have, practical use.
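To put numbers on these ratios (a sketch; the 30 km/s exhaust speed is an illustrative ion-thruster figure, not a value from this thread):

```python
# Force per input power: ideal photon rocket (1/c) versus a rocket
# exhausting massive particles (2/v), e.g. an ion thruster.
c = 299_792_458.0  # speed of light, m/s
v_ion = 30_000.0   # illustrative ion-thruster exhaust speed, m/s (assumed)

photon = 1.0 / c   # N per W
ion = 2.0 / v_ion  # N per W

print(f"photon rocket: {photon * 1e9:.3f} nN/W")
print(f"ion thruster : {ion * 1e6:.1f} uN/W")
print(f"ratio ion/photon = {ion / photon:.0f}")
```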

Interestingly the force per input power for the EM Drive, according to all three different theories (McCulloch, Shawyer and Notsosureofit) can be expressed similarly as:

(F /  Pin)EMDrive = (1/c) Q g

where Q is the quality factor of resonance and g is a dimensionless factor due to geometry, relative magnetic permeability, relative electric permittivity and mode shape, depending on the theory.  So, the force per input power for an EM Drive is predicted to be superior to a photon rocket as follows:

(F /  Pin)EMDrive/ (F /  Pin)photonRocket  =  Q g

in other words, the theoretical outperformance of the EMDrive is speculated to be due to just the quality of resonance Q and the dimensionless factor g.

For the purpose of this discussion we will set aside the strange consequences of these theories regarding conservation of momentum and conservation of energy, issues inherent in the concept of proposing a closed resonant electromagnetic cavity for space propulsion.




2) Geometric factor "g" for different theories

2a) McCulloch

McCulloch has presented a number of simple formulas for the EMDrive (http://www.ptep-online.com/index_files/2015/PP-40-15.PDF), all having the general form

(F /  Pin)EMDrive = (1/c) Q g

The simplest of these has the following definition for the dimensionless factor "g":

g=(L/Ds - L/Db)

where:

L = length of the frustum of a cone, measured perpendicular to the end faces
Ds = diameter of the small end of the frustum of a cone
Db = diameter of the big end of the frustum of a cone

So, it is evident that for this formula from McCulloch, the factor "g" is a dimensionless factor that only depends on the geometrical ratios L/Ds and L/Db:

gMcCulloch = g (L/Ds,L/Db)

It is also obvious that if one scales the EM Drive geometry such that the geometrical ratios L/Ds and L/Db are kept constant, that the dimensionless factor "g" will remain constant in McCulloch's equation.
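A minimal sketch of this scale invariance, using McCulloch's simplest formula and the NASA frustum dimensions quoted earlier:

```python
# McCulloch's simplest dimensionless factor, g = L/Ds - L/Db, and its
# invariance under a uniform rescaling of all dimensions.

def g_mcculloch(length: float, d_small: float, d_big: float) -> float:
    return length / d_small - length / d_big

L, Ds, Db = 9.0, 6.25, 11.01  # NASA frustum dimensions, inches (from the posts)
g1 = g_mcculloch(L, Ds, Db)
g2 = g_mcculloch(10 * L, 10 * Ds, 10 * Db)  # same shape, 10x the size

print(f"g       = {g1:.4f}")
print(f"g (10x) = {g2:.4f}")  # identical: g depends only on the ratios
```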

2b) Shawyer

Shawyer has presented a formula ( http://www.emdrive.com/theorypaper9-4.pdf ) for the EM Drive where the dimensionless factor "g" is defined as follows:

g = 2 Df

where Df is a dimensionless factor called the "Design Factor" by Shawyer; Df is a function of the diameters and also of the relative magnetic permeability and the relative electric permittivity, as well as of the natural frequency of resonance:

gShawyer = g(Db/Ds, L/Db, μr, εr, m, n, p)

where the diameters of the frustum of a cone appear explicitly in his formula for the "design factor", and where the length and the mode shape quantum numbers appear only implicitly, because the design factor depends on the natural frequency at which resonance with a particular mode shape occurs.

It is simple to show that if one scales the EM Drive geometry such that the geometrical ratios L/Ds and L/Db and the material properties μr, εr are kept constant, and the mode shape is kept the same, then the dimensionless factor "g" will remain constant in Shawyer's equation.

2c) Notsosureofit

Notsosureofit has presented a formula for the EM Drive (http://emdrive.wiki/@notsosureofit_Hypothesis) where the dimensionless factor "g" is defined as follows:

g = (Ψmn²/(4π³)) (c/fmnp)³ (1/L) (1/Ds² - 1/Db²)

where

Ψmn= Xmn (the zeros of the cylindrical Bessel functions) for TM modes
Ψmn= X'mn (the zeros of the first derivative of the cylindrical  Bessel functions) for TE modes

(Side note: This link is an excellent source for the numerical values of Xmn and of  X'mn for m<11 and n<6: http://wwwal.kuicr.kyoto-u.ac.jp/www/accelerator/a4/besselroot.htmlx )

Therefore, it can be shown that the "g" factor in Notsosureofit's hypothesis is a function of the geometrical ratios:

gNotsosureofit = g(L/Ds, L/Db, μr, εr, m, n, p)

How this is so will be shown in detail in the next section.




3) Natural frequency scaling

For simplicity, since the truncated cone resonant cavities tested by NASA, Shawyer, Tajmar, and others have all been close to a cylindrical cavity, we will derive the scaling relationship for the natural frequencies of a cylindrical cavity. The same can be done with the more complicated equations for a truncated cone (which, instead of cylindrical Bessel functions, are expressed in terms of spherical Bessel functions, and, instead of harmonic (cosine) functions, in terms of Associated Legendre functions).  The reason why all EM Drive experiments have been performed up to now with geometries close to a cylindrical cavity is that experimenters have tried to follow Shawyer's strange prescription that the small end of the truncated cone should remain above cut-off, i.e., that the operating frequency should exceed the cut-off frequency of an open, constant-cross-section waveguide having the same diameter as the small end (although the EM Drive is a closed cavity, not an open waveguide, and it is well known that such cut-off equations are inapplicable to closed cavities).  This prescription forbids truncated-cone geometries where the small diameter is much different from the large diameter.  Therefore it turns out that one can use a mean radius

R = (Ds + Db)/4

to model the frustum-of-a-cone cavity as a cylindrical cavity, having natural frequencies

fmnp=(c/R) amnp

where c is the speed of light, R is the previously defined mean radius and where m,n,p are the so called "quantum numbers" defining the mode shape, where m is the integer related to the circumferential direction, n is the integer related to the polar radial direction and p is the integer related to the longitudinal axial direction.

And where

amnp = √(((Ψmn/π)² + (p R/L)²)/(4 μr εr))

It is also trivial to show that since

R = (Ds + Db)/4

then

R/L =(Ds/L + Db/L)/4

hence

amnp = amnp(L/Ds, L/Db, μr, εr, m, n, p)

and that for constant geometrical ratios, constant medium properties μr, εr, and the same mode shape m,n,p, amnp will remain constant; hence the frequency will scale like the inverse of any geometrical dimension:

fmnp = (c/R) constant
     = (c/L) constant
     = (c/Db) constant
     = (c/Ds) constant
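To make the mean-radius approximation concrete, here is a sketch in Python (dimensions are from the posts above; X'01 = 3.8317 is the standard first zero of J0'; being only an approximation to the cone, it lands within about 2% of the exact frustum value of 2.165 GHz, while the 1/dimension scaling holds exactly):

```python
import math

# Cylindrical-cavity approximation for the frustum's TE012 frequency,
# using the mean radius R = (Ds + Db)/4, plus a 1/dimension scaling check.
PSI_TE01 = 3.8317  # X'01, first zero of J0' (equals the first zero of J1)
C = 299_792_458.0  # speed of light, m/s

def f_te0np(d_small, d_big, length, p, mu_r=1.0, eps_r=1.0):
    R = (d_small + d_big) / 4.0
    a = math.sqrt(((PSI_TE01 / math.pi) ** 2 + (p * R / length) ** 2)
                  / (4.0 * mu_r * eps_r))
    return (C / R) * a

inch = 0.0254
f1 = f_te0np(6.25 * inch, 11.01 * inch, 9.0 * inch, p=2)   # NASA dimensions
f2 = f_te0np(62.5 * inch, 110.1 * inch, 90.0 * inch, p=2)  # 10x larger

print(f"TE012, cylinder approximation: {f1 / 1e9:.3f} GHz")  # ~2% below exact 2.165 GHz
print(f"frequency scaling f1/f2 = {f1 / f2:.6f}")            # 10: f ~ 1/dimension
```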

3a) Proof that Notsosureofit's dimensionless factor is constant for constant geometrical ratio, constant medium properties and constant mode shape

Returning to Notsosureofit's dimensionless factor expression in point 2c:

g = (Ψmn²/(4π³)) (c/fmnp)³ (1/L) (1/Ds² - 1/Db²)

and replacing the frequency expression:

fmnp=(c/R) amnp

one obtains:

g = (Ψmn²/(4π³)) (R/amnp)³ (1/L) (1/Ds² - 1/Db²)

and therefore,

gNotsosureofit = g(L/Ds, L/Db, μr, εr, m, n, p)

since:

R/L =(Ds/L + Db/L)/4
(R/Ds)2=((1/4)(1+Db/Ds))2
(R/Db)2=((1/4)(1+Ds/Db))2
(Db/Ds)= (L/Ds)/(L/Db)

Therefore, for constant geometrical ratios, constant medium properties μr, εr, and the same mode shape m,n,p, the dimensionless factor g will remain constant.  It is trivial to show the same result for Shawyer's design factor, and hence for the dimensionless factor g in Shawyer's expression.

So, in general we can state that all three theoretical expressions (McCulloch's, Shawyer's and Notsosureofit's) are such that the dimensionless factor g remains constant for constant geometrical ratios, constant medium properties μr, εr, and the same mode shape m,n,p.
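A numerical check of this invariance (a sketch; it reuses the TE012 frequency quoted in the posts, the frustum dimensions, and X'01 = 3.8317 for the Bessel factor of a TE01p mode):

```python
import math

# Scale invariance of Notsosureofit's dimensionless factor
#   g = (Psi^2/(4 pi^3)) (c/f)^3 (1/L) (1/Ds^2 - 1/Db^2)
# Rescaling all dimensions by s multiplies f by 1/s, so (c/f)^3 grows as
# s^3 and cancels the s^-3 of the geometric terms: g depends only on ratios.
C = 299_792_458.0
PSI = 3.8317  # X'01, appropriate for TE01p modes

def g_notsosureofit(d_small, d_big, length, f):
    return ((PSI ** 2 / (4 * math.pi ** 3)) * (C / f) ** 3
            * (1 / length) * (1 / d_small ** 2 - 1 / d_big ** 2))

inch = 0.0254
Ds, Db, L = 6.25 * inch, 11.01 * inch, 9.0 * inch  # NASA dimensions
f = 2.16467e9  # TE012 natural frequency from the posts, Hz

g1 = g_notsosureofit(Ds, Db, L, f)
g2 = g_notsosureofit(10 * Ds, 10 * Db, 10 * L, f / 10)  # 10x size, f/10
print(g1, g2)  # identical up to floating-point rounding
```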




4) Quality of resonance (Q) scaling

The definition of quality of resonance factor (Q) can be stated as follows (https://en.wikipedia.org/wiki/Q_factor#Definition_of_the_quality_factor):



Q ≝ ω EnergyStored /PowerLoss

where

ω = angular frequency
EnergyStored =∫Electromagnetic Energy Density dV
PowerLoss =  ((ω δ) /2)  (∫ Electromagnetic Energy Density dA)
                 = Rs  (∫ Electromagnetic Energy Density dA)/ μ
                 = ρ  (∫ Electromagnetic Energy Density dA)/ (μ δ)

where 

Rs = "surface resistance"
   = ρ / δ
ρ = resistivity of the interior wall of the EM Drive resonant cavity
μ = magnetic permeability of the interior wall of the EM Drive resonant cavity
   = μoμr
δ =skin depth (the penetration depth of the electromagnetic energy into the interior metal wall)
     (https://en.wikipedia.org/wiki/Skin_effect)

in general, for arbitrary frequencies, the skin depth is:

δ = √(2ρ/(ω μ)) √(√(1 + (ρ ε ω)²) + ρ ε ω)
where

ε = electric permittivity of the interior wall of the EM Drive resonant cavity
   = εoεr

At angular frequencies ω much below 1/(ρε) (for copper, this means frequencies much below an exahertz, i.e. 10^9 GHz, the range of hard X-rays and Gamma rays), the skin depth can be expressed as follows:

δ = √(2ρ/(ω μ)) = √(ρ/(π μ fmnp))

Now, using the fact that

PowerLoss = ((ω δ)/2) (∫ Electromagnetic Energy Density dA)

one immediately obtains:

Q=(2/SkinDepth)( ∫Electromagnetic Energy Density dV/ ∫ Electromagnetic Energy Density dA)

Alternatively one can arrive at the same result, using the formula for power loss that depends on the "surface resistance" Rs:

PowerLoss = Rs  (∫ Electromagnetic Energy Density dA)/ μ
PowerLoss = ρ  (∫ Electromagnetic Energy Density dA)/ (μ δ)

one gets:

Q = ω μ    (∫Electromagnetic Energy Density dV)/ (Rs ∫ Electromagnetic Energy Density dA)
Q = ω μ δ (∫Electromagnetic Energy Density dV)/ (ρ ∫ Electromagnetic Energy Density dA)

and using the fact (at angular frequencies ω much below 1/(ρε)) that the angular frequency ω can be expressed in terms of the square of the skin depth δ:

ω = 2ρ/(μ δ²)

it is straightforward to show that the quality of resonance Q is:

Q=(2/SkinDepth)( ∫Electromagnetic Energy Density dV/ ∫ Electromagnetic Energy Density dA)

the electromagnetic energy density integrated over the cavity volume, divided by the electromagnetic energy density integrated over the cavity surface area, divided by the skin depth.
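A quick numerical consistency check of the two equivalent Q expressions above (a sketch; the volume-to-area energy ratio is an arbitrary illustrative number, not a computed cavity value):

```python
import math

# Numerical check that the two Q expressions agree once
# omega = 2*rho/(mu*delta^2) is substituted:
#   Q = omega*mu*delta*(V/A)/rho   equals   Q = (2/delta)*(V/A)
rho = 1.71e-8            # ohm*m, resistivity from the post
mu = 4 * math.pi * 1e-7  # H/m
delta = 1.41457e-6       # m, skin depth from the exact solution quoted later
v_over_a = 0.037         # m, illustrative (volume integral)/(area integral) ratio

omega = 2 * rho / (mu * delta ** 2)
q_a = omega * mu * delta * v_over_a / rho
q_b = (2.0 / delta) * v_over_a
print(q_a, q_b)  # equal
```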

4a) Skin depth scaling

At frequencies much below 1/(ρε) the skin depth can be expressed as

SkinDepth = √(ρ/(μ π fmnp))

where
ρ = resistivity of the interior wall of the EM Drive resonant cavity
ε = electric permittivity of the interior wall of the EM Drive resonant cavity
   = εoεr
μ = magnetic permeability of the interior wall of the EM Drive resonant cavity
   = μoμr
fmnp = resonant frequency at mode shape m,n,p
        = ωmnp/(2π)

Plugging in the expression for the frequency

fmnp=(c/R) amnp

into the skin depth expression, results in the following expression:

SkinDepth = √R√(ρ/(μ π c amnp))

or, using the previously derived expressions for amnp, one concludes that the skin depth scales like the square root of any geometrical dimension, for constant resistivity and magnetic permeability of the interior wall of the cavity, constant geometrical ratios, constant medium properties μr, εr, and the same mode shape m,n,p.

In other words, for increasing dimensions of the cavity, preserving all geometrical ratios, and keeping material properties constant and for the same mode shape, the skin depth will increase with the square root of the dimension, while the frequency will decrease, as the inverse of the dimension.
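A minimal check of the square-root scaling of the skin depth (a sketch; it uses the resistivity quoted earlier and the low-frequency skin-depth formula):

```python
import math

# Square-root scaling of the skin depth: scaling every cavity dimension
# by s multiplies the resonant frequency by 1/s, so
# delta = sqrt(rho/(mu*pi*f)) grows by sqrt(s).
rho = 1.71e-8            # ohm*m, copper alloy 101 (from the post)
mu = 4 * math.pi * 1e-7  # H/m

def skin_depth(f):
    return math.sqrt(rho / (mu * math.pi * f))

f0 = 2.16467e9  # TE012 frequency, Hz (from the post)
ratio = skin_depth(f0 / 10) / skin_depth(f0)  # 10x-larger cavity vs original
print(ratio)  # sqrt(10) ~ 3.1623
```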




4b) Quality of resonance (Q) scaling

Having revealed the scaling law for the skin depth, what now remains to be shown is the scaling for the energy integral ratio in the expression for Q:

Q=(2/SkinDepth)(∫Electromagnetic Energy Density dV/ ∫ Electromagnetic Energy Density dA)

The expressions under the integrals are dependent on each mode shape, as the electromagnetic energy distribution depends on mode shape, of course.  However, we can notice that the lowest mode shapes (those with low values of m,n,p, for example TE012, TM212) have been of interest in the EM Drive experiments so far.  So, for simplification purposes we can assume that the distribution of the electromagnetic field is of low order, and hence not that much variable throughout the cavity, for low mnp number mode shapes (for example m=0 means a constant distribution in the azimuthal circumferential direction of the cavity).  Under this assumption one can (for approximation purposes) take the energy out of the integral:

(∫ Electromagnetic Energy Density dV / ∫ Electromagnetic Energy Density dA) ~
                     ~ (Electromagnetic Energy Density / Electromagnetic Energy Density) (∫dV / ∫dA)
                     ~ InteriorVolume / InteriorSurfaceArea
                     ~ π R² L / (2 π R (R + L))
                     ~ R / (2 (1 + R/L))

and substituting this and the previously found scaling law for the skin depth, into the expression for the quality of resonance factor Q, leads to:

Q=(2/SkinDepth)(∫Electromagnetic Energy Density dV/ ∫ Electromagnetic Energy Density dA)
 ~(2/(√R√(ρ/(μ π c amnp)))) R/(2(1+R/L))
 ~ √R b

where the factor b is:

b = (1/((1+R/L)√(ρ/(μ π c amnp))))   

or, using the previously derived expressions for amnp, one concludes that the quality of resonance (Q) scales like the square root of any geometrical dimension, for constant resistivity and magnetic permeability of the interior wall of the cavity, constant geometrical ratios, constant medium properties μr, εr, and the same mode shape m,n,p.

In other words, for increasing dimensions of the cavity, preserving all geometrical ratios, and keeping material properties constant and for the same mode shape, the quality of resonance (Q) will increase with the square root of the dimension, also the skin depth will increase with the square root of the dimension, while the frequency will decrease, as the inverse of the dimension.
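Putting the pieces together for NASA's cavity (a sketch; given the crude uniform-energy-density assumption, only order-of-magnitude agreement with the exact-solution Q = 78642 should be expected):

```python
import math

# Approximate Q = (2/delta) * (V/A), with the cylinder estimate
# V/A ~ R/(2*(1 + R/L)), evaluated for NASA's cavity dimensions.
rho = 1.71e-8            # ohm*m, copper alloy 101
mu = 4 * math.pi * 1e-7  # H/m

inch = 0.0254
Ds, Db, L = 6.25 * inch, 11.01 * inch, 9.0 * inch
R = (Ds + Db) / 4.0      # mean radius, m
f = 2.16467e9            # TE012 frequency, Hz (from the post)

delta = math.sqrt(rho / (mu * math.pi * f))
q_approx = (2.0 / delta) * R / (2.0 * (1.0 + R / L))
print(f"approximate Q = {q_approx:.0f}")  # same order of magnitude as the exact 78642
```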

Furthermore, we previously proved that all three theories for the EM Drive (McCulloch, Shawyer and Notsosureofit) express the force/inputPower as proportional to the quality factor Q times a dimensionless factor g:

(F /  Pin)EMDrive/ (F /  Pin)photonRocket  =  Q g
(F /  Pin)EMDrive = (1/c) Q g

and we previously proved that the dimensionless factor g (for all three theories: McCulloch, Shawyer and Notsosureofit) remains perfectly constant for constant geometrical ratios, constant medium properties μr, εr, and the same mode shape m,n,p.

Therefore one concludes that the force per input power (for all three theories: McCulloch, Shawyer and Notsosureofit) scales like the square root of any geometrical dimension, for constant resistivity and magnetic permeability of the interior wall of the cavity, constant geometrical ratios, constant medium properties μr, εr, and the same mode shape m,n,p.

In other words, to maximize the force per input power according to all three theories (McCulloch, Shawyer and Notsosureofit), the most efficient EM Drive would be as large as possible, because the quality factor of resonance Q (all else being equal) scales like the square root of the geometrical dimensions.
Small-cavity EM Drives (all else being equal) are predicted to have a smaller quality of resonance Q and therefore a smaller force/inputPower.

It is not clear whether this has been known to EM Drive experimenters, given that the recent experiments by Prof. Tajmar at TU Dresden, Germany (under advice from Roger Shawyer, according to the report) were performed with a much smaller EM Drive, and that several EM Drive researchers are discussing really tiny EM Drives (such as the group in Aachen, Germany) for use in CubeSats.  Such EM Drives are predicted to be much less efficient, having substantially lower force/inputPower.
« Last Edit: 02/07/2016 02:20 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
I verified numerically what I wrote above, using the exact solution for a truncated cone in terms of spherical Bessel and Associated Legendre functions, in Wolfram Mathematica.

Numerical verification analysis details




NASA test without dielectric insert

Input
bigDiameter = (11.01 inch)*(2.54 cm/inch)*(1 m/(100 cm));
smallDiameter = (6.25 inch)*(2.54 cm/inch)*(1 m/(100 cm));
axialLength = (9 inch)*(2.54 cm/inch)*(1 m/(100 cm));
Material: Copper alloy 101; resistivity = 1.71*10^(-8) ohm meter

Exact solution output
TE012 natural frequency = 2.16467 GHz
TE012 skin depth = 1.41457 micrometers
TE012 Q = 78,642.4




10x larger

Input
bigDiameter = (110.1 inch)*(2.54 cm/inch)*(1 m/(100 cm));
smallDiameter = (62.5 inch)*(2.54 cm/inch)*(1 m/(100 cm));
axialLength = (90 inch)*(2.54 cm/inch)*(1 m/(100 cm));
Material: Copper alloy 101; resistivity = 1.71*10^(-8) ohm meter

Exact solution output
TE012 natural frequency = 0.216467 GHz
TE012 skin depth = 4.43121 micrometers
TE012 Q = 251,049.

frequency scaling: (2.1646723144342628`*^9/2.1646723144342667`*^8)/10 =1.

Q scaling: (78642.44767279371`/251049.34868706256`)*Sqrt[10] = 0.990599




1/10 of original size

Input
bigDiameter = (1.101 inch)*(2.54 cm/inch)*(1 m/(100 cm));
smallDiameter = (0.625 inch)*(2.54 cm/inch)*(1 m/(100 cm));
axialLength = (0.9 inch)*(2.54 cm/inch)*(1 m/(100 cm));
Material: Copper alloy 101; resistivity = 1.71*10^(-8) ohm meter

Exact solution output
TE012 natural frequency = 21.6467 GHz
TE012 skin depth = 0.443121 micrometers
TE012 Q = 25,104.9

frequency scaling: (2.1646723144342628`*^9/2.164672314434267`*^10)*10 =1.

Q scaling: (78642.44767279371`/25104.934868706456`)/Sqrt[10] = 0.990599




We confirm:

when using the exact solution for resonance of a frustum of a cone, for constant resistivity and magnetic permeability of the interior wall of the cavity and for constant geometrical ratios, constant medium properties μr,εr, and for the same mode shape TE012:


* the frequency scales (exactly) like the inverse of any geometrical dimension

* therefore the skin depth scales (exactly) like the square root of any geometrical dimension

* the quality of resonance (Q) scales approximately like the square root of any geometrical dimension, within 1% accuracy

The 1% error is due to this approximation:

(∫ElectromagneticEnergy dV/ ∫ ElectromagneticEnergy dA) ~
                     ~ (ElectromagneticEnergy/ElectromagneticEnergy) (∫dV/ ∫ dA )
                     ~ InteriorVolume/InteriorSurfaceArea
                     ~ π R²L/(2 π R (R+L) )
                     ~ R/(2(1+R/L))

approximating the behavior of the electromagnetic mode shape as being almost constant throughout the cavity (this approximation is pretty good for a low mode like TE012 but is expected to degrade if one considers higher modes)
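The skin-depth values and their scaling can be checked directly. A minimal sketch, assuming the standard conductor skin depth δ = √(ρ/(π f μ)) with μr = 1 for copper (the helper name is ours):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
RHO = 1.71e-8              # resistivity of copper alloy 101, ohm meter (as above)

def skin_depth(f_hz, rho=RHO, mu_r=1.0):
    """Classical conductor skin depth: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(rho / (math.pi * f_hz * mu_r * MU0))

f0 = 2.16467e9                      # TE012 natural frequency, original size, Hz
print(skin_depth(f0) * 1e6)         # ~1.4146 micrometers, matching the value above

# Scaling every dimension by s multiplies f by 1/s, hence delta by sqrt(s),
# while V/A grows by s; so Q ~ (2/delta)*(V/A) grows by s/sqrt(s) = sqrt(s).
print(skin_depth(f0 / 10) / skin_depth(f0))   # = sqrt(10)
```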
« Last Edit: 01/15/2016 06:47 PM by Rodal »

Offline SeeShells

  • Senior Member
  • *****
  • Posts: 2287
  • Every action there's a reaction we try to grasp.
  • United States
  • Liked: 2862
  • Likes Given: 2501
VERY NICE!

Offline DaCunha

  • Member
  • Posts: 35
  • Liked: 38
  • Likes Given: 14
Highly interesting. So it is proven that the dielectric insert is essential.


But this makes it difficult to understand the optical path length changes.
Why should there be any significant change in path length, if the effect can be attributed to inertial mass fluctuations between the cavity ends?


Has there been any computation of the equivalent mass distribution that would, according to General Relativity, make for the path length change measured by Eagleworks?



 
« Last Edit: 01/14/2016 01:46 PM by DaCunha »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...
This is great news. :) I came to nearly the same conclusion some time last year (Q=79011). I never posted it, as I was not sure about the formula (found as an approximation in a CERN paper about cavities, if memory serves) and my implementation. No -3dB bandwidth is needed for the calculation; it is mode-, volume- and conductivity-dependent.
Based on this, the Q at larger volumes is in general (mode dependent) bigger than at smaller volumes. I think more energy can be stored in larger volumes.
If I divide all dimensions by a factor of 10, I get a 10 times higher resonant frequency (good so far) but a Q of only 24985. Could you be so kind as to check this please? I feel something may still be wrong with this calculation, although the number for the original dimensions fits yours very well.

Please notice that when reducing the size to 1/10 of the original size, you calculate a Q=24,985 down from the original Q=79,011

So that the scaling you calculate is:

X-Ray calculated Q scaling: (79,011/24,985)/Sqrt[10] = 1.0000

which goes exactly like the square root of the dimension, instead of my calculation (here: https://forum.nasaspaceflight.com/index.php?topic=39214.msg1474351#msg1474351 ) using the exact solution:

Rodal calculated Q scaling: (78642.44767279371`/25104.934868706456`)/Sqrt[10] = 0.990599

showing that the exact solution differs by 1% from the approximate rule of Q scaling like the square root.

I justify the 1% difference between the exact solution for Q and the approximation involved in the scaling calculations for Q, as due to the approximation of the energy integral in my discussion of Q scaling:

Quote
The 1% error is due to this approximation:

(∫ElectromagneticEnergy dV/ ∫ ElectromagneticEnergy dA) ~
                     ~ (ElectromagneticEnergy/ElectromagneticEnergy) (∫dV/ ∫ dA )
                     ~ InteriorVolume/InteriorSurfaceArea
                     ~ π R²L/(2 π R (R+L) )
                     ~ R/(2(1+R/L))

approximating the behavior of the electromagnetic mode shape as being almost constant throughout the cavity (this approximation is pretty good for a low mode like TE012 but is expected to degrade if one considers higher modes)

QUESTION to X-Ray;  are you approximating the energy integral calculation in your Q calculation as above, and is that why your calculation results in perfect scaling of Q going like the square root of the dimension ?
« Last Edit: 01/15/2016 07:10 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
.../...
QUESTION to X-Ray;  are you approximating the energy integral calculation in your Q calculation as above, and is that why your calculation results in perfect scaling of Q going like the square root of the dimension ?
It scales like the square root because it is the square root ;)
And yes, for higher quantum numbers the calculated values will become problematic; not obviously so on their own, but in comparison with field simulations.
« Last Edit: 01/15/2016 09:01 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
.../...
It scales like the square root because it is the square root ;)

Yes, that shows that your scaling factor is exactly Sqrt[L]; for example, for scaling by a factor of 10, Sqrt[10] = 3.16228.

Your formula for Q uses the same approximation I used in the scaling !

It should be fine for low modes, particularly TE01 modes (constant in m, and only a small variation in n).

Thank you so much for sharing your calculation and making this clear  :) (No secret magic  ;) )

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
EDITED: work under review.  Will post more results

________________________________________________

As Tajmar disclosed that the tested frustum of a cone had spherical ends (in a number of e-mails with Flux_Capacitor), and he did not provide a drawing clarifying the technical meaning of the dimensions he provided, it is not possible to have a unique interpretation of the dimensions of his resonant cavity: there are several possible interpretations of the "height" of a frustum of a cone with spherical ends.

I will show results for two likely interpretations of what Tajmar refers to as the "height" of the frustum of a cone with spherical ends:

A) The lateral length of the conical walls of the frustum of a cone, from the small end to the big end

B) The length (from the small end to the big end) measured perpendicular to the lines defining the small and big diameters of the frustum of a cone.

All cases will assume that the small radius and big radius are the correct internal dimensions of the diameters divided by 2:

bigR = 0.0541 meter;
smallR = 0.0385 meter;

where the height given in the latest version of his AIAA paper is assumed to be 1/2 the internal height, so that the actual internal height is taken to be twice that value (2*0.0686 meter)


This is the geometry defining the spherical radii r1, r2 and the halfconeangle "θ"



A1) Height assumed to mean the lateral length of the conical walls of the frustum of a cone, from the small end to the big end, where the height given in the latest version of his AIAA paper (2*0.0686 meter) is assumed to be the internal height.  Then, we have:

bigR = 0.0541 meter;
smallR = 0.0385 meter;
axialLength = 2*0.0686 meter

halfAngleConeRadians =  ArcSin[(bigR - smallR)/axialLength];
halfAngleConeDegrees = (180/Pi)*halfAngleConeRadians
                                           = 6.5288 degrees

r1 = axialLength/((bigR/smallR) - 1)
    = 0.338603 meter

r2 = axialLength/(1 - (smallR/bigR))
   =  0.475803 meter

(r2 - r1)/2 = 0.0686 meter (as given)

EXACT SOLUTION

first natural frequency (mode shape TM010 (*)) = 2.36611 GHz

Q (resistivity = 1.71*10^(-8) ohm-meter (*Material: Copper alloy 101*)) =34515.3

second natural frequency (mode shape TM011 (*)) = 2.85355 GHz
_________________________________________________________________

B1) Height assumed to mean the length (from the small end to the big end) measured perpendicular to the lines defining the small and big diameters of the frustum of a cone, where the height given in the latest version of his AIAA paper (2*0.0686 meter) is assumed to be the internal height.  Then, we have:

bigR = 0.0541 meter;
smallR = 0.0385 meter;
axialLength = 2*0.0686 meter

halfAngleConeRadians =  ArcTan[(bigR - smallR)/axialLength];
halfAngleConeDegrees = (180/Pi)*halfAngleConeRadians
                                           = 6.48682 degrees

r1 = smallR /Sin[halfAngleConeRadians]
    = 0.340784 meter

r2 = bigR /Sin[halfAngleConeRadians]
   =  0.478868 meter

(r2 - r1)/2 = 0.069042 meter

EXACT SOLUTION

first natural frequency (mode shape TM010 (*)) = 2.35096 GHz

Q (resistivity = 1.71*10^(-8) ohm-meter (*Material: Copper alloy 101*)) = 34626.3

second natural frequency (mode shape TM011 (*)) = 2.83528 GHz
_________________________________________________________________

 (*) The first mode shape in a truncated cone is NOT constant in the longitudinal direction.  We label it as TM010 here following the convention in these threads of calling the mode shape closest to the one in a cylindrical cavity, but it should be understood that TM010 electromagnetic fields vary in the longitudinal direction

(**) The theoretical Q for perfect coupling should have been a little less than 34,000.  Since Tajmar's test had an awfully small Q (48.8 in ambient conditions and 20 in partial vacuum), Tajmar's test had horribly bad coupling! No doubt due to the way they coupled the huge waveguide into the small cavity.
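The two interpretations of the "height" above can be reproduced with a short sketch (the function names are ours; the formulas are exactly the ones listed in cases A1 and B1):

```python
import math

bigR, smallR, height = 0.0541, 0.0385, 2 * 0.0686  # meters

def interp_A(bigR, smallR, lateralLength):
    """Interpretation A: height = lateral (slant) length of the conical wall."""
    half = math.asin((bigR - smallR) / lateralLength)   # half-cone angle, rad
    r1 = lateralLength / (bigR / smallR - 1)            # small spherical radius
    r2 = lateralLength / (1 - smallR / bigR)            # big spherical radius
    return math.degrees(half), r1, r2

def interp_B(bigR, smallR, axialLength):
    """Interpretation B: height = axial length, perpendicular to the end diameters."""
    half = math.atan((bigR - smallR) / axialLength)
    r1 = smallR / math.sin(half)
    r2 = bigR / math.sin(half)
    return math.degrees(half), r1, r2

print(interp_A(bigR, smallR, height))  # ~ (6.5288 deg, 0.338603 m, 0.475803 m)
print(interp_B(bigR, smallR, height))  # ~ (6.48682 deg, 0.340784 m, 0.478868 m)
```

Note that in interpretation A, (r2 - r1)/2 returns exactly the given 0.0686 m, while in interpretation B it gives 0.069042 m, as reported above.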
« Last Edit: 01/19/2016 01:57 AM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Nice analysis! Seems you put a lot of work into it.
It would be much easier if the paper were correct, at least regarding the dimensions of the truncated cone used by the group itself.
As I recall, the first version of the paper was completely inconsistent on this point. After the first contact they corrected the radii, and I am a little surprised now that obviously not all of the dimensions were evaluated and corrected exactly.  ???
« Last Edit: 01/16/2016 09:08 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
UNDER CONSTRUCTION

CONSERVATION OF RELATIVISTIC MOMENTUM FOR REACTION-LESS PROPULSION THROUGH VARIABLE INERTIAL MASS

A) Minotti shows the EM Drive force to be due to a gravitomagnetic General Relativity effect (coupling of
a 4-dimensional version of Kaluza-Klein's unified field theory of gravitation and electromagnetism, built around the idea of a fifth dimension beyond the usual four of space and time coupled to an external scalar field ψ, which in turn couples to matter),

(Fernando O. Minotti, Scalar-tensor theories and asymmetric resonant cavities, Grav. & Cosmol. 19 (2013) 201, http://arxiv.org/abs/1302.5690)

Minotti states that the weak energy condition (the condition demanding that the energy density measured by any observer be non-negative) (https://en.wikipedia.org/wiki/Energy_condition#Weak_energy_condition) is violated for the EM Drive in Minotti's theory.

B) Minotti also references Lobo and Visser's paper

(Francisco S. N. Lobo, Matt Visser, Fundamental limitations on "warp drive" spacetimes, Class.Quant.Grav. 21 (2004) 5871-5892, http://arxiv.org/abs/gr-qc/0406083)

that states that the weak energy condition (non-negative energy density) is also violated in other models of propellant-less (reaction-less) forms of proposed space-propulsion.

C) McCulloch,

(M. E. McCulloch, "Can the Emdrive Be Explained by Quantised Inertia?", PROGRESS IN PHYSICS Issue 1, Volume 11, (January 2015) (http://www.ptep-online.com/index_files/2015/PP-40-15.PDF) and "Testing quantised inertia on the emdrive",  EPL (Europhysics Letters), Volume 111, Number 6, 1 October 2015)

also proposes that the EM Drive self-accelerates because radio frequency photons at the larger end have higher inertial mass, and therefore to conserve momentum in its reference frame, the cavity must move towards the narrow end.

This motivated me to analyze conservation of momentum for the EM Drive (or any such resonant cavity proposed for reaction-less propulsion), modeled as a lumped mass that is able to change its inertial mass.  Conservation of momentum of the EM Drive under these theories would then be satisfied when the change in mass is duly taken into consideration.

Here, I define momentum, using the relativistic definition of momentum

(https://en.wikipedia.org/wiki/Momentum#Relativistic_mechanics, https://en.wikipedia.org/wiki/Mass_in_special_relativity#The_relativistic_energy-momentum_equation, and
https://en.wikipedia.org/wiki/Tests_of_relativistic_energy_and_momentum),

which leads to the following equation:

p = γ m v,   where the Lorentz factor γ = 1/√(1 - (v/c)²)

I  then define the following dimensionless variables:

dimensionless change in mass

deltaMass/InitialMass

dimensionless change in velocity

deltaV/InitialVelocity

dimensionless initial velocity

InitialVelocity/c

which allow me to express the conservation of relativistic momentum in terms of these dimensionless variables.  The equation (resulting from conservation of momentum) gives the following dimensionless change in mass:

deltaMass/InitialMass = (γ0 InitialVelocity)/(γ1 (InitialVelocity + deltaV)) - 1,   with γ0, γ1 the Lorentz factors of the initial and final velocities

I then calculate the dimensionless change in mass as a function of the other two variables: 1) dimensionless change in velocity and 2) dimensionless initial velocity.  I plot the results using Wolfram Mathematica.

______________________________________________________________________

Results and discussion

1) Acceleration, with deltaV/InitialVelocity ranging from 0 to 2 and with InitialVelocity ranging from 0 to 10% of the speed of light



For this range we see that deltaMass/InitialMass is practically independent of the magnitude of the InitialVelocity/c ratio. Acceleration implies a negative change in mass (a decrease in inertial mass) from 0 (for zero change in velocity) to a decrease of 60% of the initial mass as deltaV/InitialVelocity increases from 0 to 2.

2) Acceleration, with deltaV/InitialVelocity ranging from 0 to 2 and with InitialVelocity ranging from 0 to the speed of light


 
For this range we see that deltaMass/InitialMass depends strongly on the magnitude of the InitialVelocity/c ratio for initial velocities exceeding 15% of the speed of light. Acceleration implies a negative change in mass (a decrease in inertial mass) from 0 (for zero change in velocity) to a decrease approaching 100% of the initial mass (at which point the magnitude of the negative mass equals the initial mass), as deltaV/InitialVelocity increases from 0 to 2.

A frontier is formed (for deltaMass/InitialMass = -1) at speeds that are a sizeable fraction of the speed of light, beyond which it is no longer possible to accelerate.

3) Acceleration, with deltaV/InitialVelocity ranging from 0 to 50 and with InitialVelocity ranging from 0 to 40% of the speed of light


 
For this range we see that deltaMass/InitialMass depends strongly on the magnitude of the InitialVelocity/c ratio for initial velocities exceeding 1.5% of the speed of light. Acceleration implies a negative change in mass (a decrease in inertial mass) from 0 (for zero change in velocity) to a decrease approaching 100% of the initial mass (at which point the magnitude of the negative mass equals the initial mass).

A frontier is formed (for deltaMass/InitialMass = -1), here already at speeds of a few percent of the speed of light, beyond which it is no longer possible to accelerate.

4) Acceleration, with deltaV/InitialVelocity ranging from 0 to 500 and with InitialVelocity ranging from 0 to 1% of the speed of light


 
For this range we see that deltaMass/InitialMass depends strongly on the magnitude of the InitialVelocity/c ratio for initial velocities exceeding 0.15% of the speed of light. Acceleration implies a negative change in mass (a decrease in inertial mass) from 96% (for a small change in velocity) to a decrease approaching 100% of the initial mass (at which point the magnitude of the negative mass equals the initial mass).

A frontier is formed (for deltaMass/InitialMass = -1), even at speeds that are a small fraction of the speed of light, beyond which it is no longer possible to accelerate.

5) Based on the above plots, we see that such a mode of space propulsion (reaction-less propulsion by variable mass) is quite limited in the speeds and changes in speed that it could achieve.

6) Deceleration



For curiosity's sake we display what it would be like to decelerate by changing inertial mass.  Deceleration would be achieved by an internal increase in mass.  The needed increase in mass approaches infinity for speeds approaching the speed of light, or for deltaV/InitialVelocity approaching -100%

7) I also show a plot that includes both the deceleration and acceleration ranges.





Notes:
1) No warping of spacetime is considered in the analysis, only a reactionless variable mass is considered.

2) Forward (Robert Forward, "Negative matter propulsion", Journal of Propulsion and Power, Vol. 6, No. 1 (1990), http://arc.aiaa.org/doi/abs/10.2514/3.23219?journalCode=jpp) and Bondi have used similar expressions when discussing momentum conservation (https://en.wikipedia.org/wiki/Negative_mass#Runaway_motion), but they only consider the case of two bodies with identical absolute value of mass (one body with mass +m and another with mass -m), instead of the case discussed here of a continuously variable mass.

3) The equations presented are frame-indifferent, but one of the variables chosen to present the results graphically is not frame-indifferent: deltaV/InitialVelocity.  DeltaV is obviously frame-indifferent, being a difference of velocities, but the speed of light is clearly the only frame-indifferent speed with which to non-dimensionalize all variables, instead of using the initial velocity to non-dimensionalize deltaV. 
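The conservation statement analyzed above can also be sketched numerically. This is a minimal sketch under one natural reading of the setup (relativistic momentum p = γmv, with the rest mass allowed to change from m to m + Δm as the velocity changes from v to v + Δv); the function name is ours, not the author's:

```python
import math

def gamma(beta):
    """Lorentz factor for velocity v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def delta_mass_ratio(beta0, dbeta):
    """deltaMass/InitialMass required by conservation of relativistic
    momentum, gamma0*m*v0 = gamma1*(m+dm)*(v0+dv), with beta = v/c."""
    beta1 = beta0 + dbeta
    return gamma(beta0) * beta0 / (gamma(beta1) * beta1) - 1.0

# Non-relativistic limit with deltaV/InitialVelocity = 2:
# dm/m -> v0/(3*v0) - 1 = -2/3, i.e. a decrease of roughly two thirds
# of the initial mass.
print(delta_mass_ratio(1e-6, 2e-6))

# Near the frontier v0 + dv -> c, the required change approaches -100%
# of the initial mass (deltaMass/InitialMass -> -1).
print(delta_mass_ratio(0.4, 0.599999))
```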
« Last Edit: 02/05/2016 06:37 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
UNDER CONSTRUCTION
CONSERVATION OF RELATIVISTIC MOMENTUM FOR REACTION-LESS PROPULSION THROUGH VARIABLE INERTIAL MASS

It is better to express the solution solely in terms of frame-indifferent variables, as much as possible, which can readily be accomplished for the dimensionless form of the deltaV variable with the following change of variables:

deltaV/InitialVelocity =( deltaV/c ) / (InitialVelocity/c)

deltaV is obviously frame-indifferent, being a difference of velocities.  The speed of light is clearly the only frame-indifferent speed to non-dimensionalize all variables. 

The term  (InitialVelocity/c) appears naturally in the Lorentz (γ) factor of relativity  https://en.wikipedia.org/wiki/Lorentz_factor

Actually, one can substitute this variable (InitialVelocity / c) with γ:

InitialVelocity / c = √(1 - (1/γ)²)

Therefore, we define a new dimensionless deltaV variable, by dividing deltaV by the speed of light instead of the initial speed:

deltaV/c

we express the equation for the dimensionless change in mass only in terms of the above variable deltaV/c and the previously defined variable  InitialVelocity/c:



and the dimensionless change in mass :



Since conservation of momentum means that the relativistic momenta of the initial and final configurations are equal:

γ0 InitialMass InitialVelocity = γ1 (InitialMass + deltaMass) (InitialVelocity + deltaV)

this equality becomes, when expressed in terms of the above-mentioned variables, the following expression:




and using this equation, one can plot the results as a function of these terms:

deltaMass/InitialMass = function (deltaV/c , InitialVelocity/c)

everything becomes clearer.  In particular, the reason why there is a frontier becomes evident.

Very interesting: small increases in speed require much less negative mass.

A small increase in speed (10^-6 deltaV/c) requires only a very small negative mass
« Last Edit: 02/09/2016 06:37 PM by Rodal »

Offline frobnicat

  • Full Member
  • ****
  • Posts: 518
  • Liked: 500
  • Likes Given: 151
.../...
and using this equation, one can plot the results in completely frame-indifferent terms:

deltaMass/InitialMass = function (deltaV/c , InitialVelocity/c)
.../...

It is not clear to me in your argument why InitialVelocity/c would qualify as "frame-indifferent". Different inertial observers could agree on deltaV/c but see different values for InitialVelocity/c, and hence predict different outcome deltaMass/InitialMass. The other way around, measuring a certain deltaMass/InitialMass and a certain deltaV/c would imply one peculiar InitialVelocity/c that would hold for one privileged inertial observer and not for the others. Am I misunderstanding something ?

Offline RERT

In a similar vein to frobnicat's post above: in the EM Drive thread you noted that deceleration requires significantly different negative-mass creation than acceleration from rest.

But consider this: a) use your device to accelerate something to some velocity b) turn it off, so that the device now moves at constant velocity c) move your frame of reference to that inertial frame, so that the device is once again at rest d) rotate the device 180 degrees then switch it back on, accelerating in that new inertial frame (and decelerating in the original frame).

It is extremely hard to see how acceleration from rest in the opposite direction can require any different consumption of anything than in the original direction.

R.
[modified to make more sense after posting]
« Last Edit: 02/05/2016 08:46 AM by RERT »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Frobnicat and Rert

Thank you for your comments, they are much appreciated.  The post was labeled UNDER CONSTRUCTION when you posted them.

I need to get the time to finish my post, and then to carefully address the comments.  I will answer your questions once the "UNDER CONSTRUCTION" label is removed. ;)
« Last Edit: 02/05/2016 07:43 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
In a similar vein to frobnicat's post above: in the EM Drive thread you noted that deceleration requires significantly different negative-mass creation than acceleration from rest.

But consider this: a) use your device to accelerate something to some velocity b) turn it off, so that the device now moves at constant velocity c) move your frame of reference to that inertial frame, so that the device is once again at rest d) rotate the device 180 degrees then switch it back on, accelerating in that new inertial frame (and decelerating in the original frame).

It is extremely hard to see how acceleration from rest in the opposite direction can require any different consumption of anything than in the original direction.

R.
[modified to make more sense after posting]
Although I need to have the time to finish my post, and remove the "UNDER CONSTRUCTION" label, these are my present thoughts

1) Please go over carefully, and in more detail, the mathematics of your statements (actually I would appreciate seeing equations) regarding conservation of momentum, particularly this one: <<d) rotate the device 180 degrees>>.  Non-inertial systems are not equivalent: the space-time symmetries no longer hold, as they do for inertial systems.  Therefore, scientists living inside a box that is being rotated (or otherwise accelerated) can measure their frame's motion or acceleration by observing the inertial forces on physical objects inside the box.  In a rotating frame, one space direction is superior to all others: the axis of rotation, which breaks the isotropy of space.  In the case of our Earth as a closed system, experiments like the Foucault pendulum can demonstrate the rotation of the Earth, or one can use gyrocompasses, which exploit the rotation of the Earth to find the direction of true north during navigation.

2) Take a look at the mathematical solution. The solution function is continuous; for negative values of deltaV/c it is double-valued.  The asymmetry you are addressing arises because you are arbitrarily taking into account only one of the possible values for negative deltaV/c.  For consistency, you should instead take into account all possible values of the multi-valued function when addressing its symmetry.

Since the solution is multi-valued, I need time to examine it further in both directions, and to assess its physical significance (if any, because the notion of negative mass and energy is anything but intuitive!).

Negative mass-energy is prevented by the Weak Energy Condition (WEC), so this solution may not be physically possible if the WEC holds.  And concerning the Casimir effect, as I have explained several times in previous threads, Prof. Jaffe at MIT and others think that the Casimir effect (and other effects that some view as "negative energy") can perfectly well be explained without resorting to the notion of negative energy.  So I would not be too surprised if a negative mass-energy solution looks strange... but I will look at it further
« Last Edit: 02/05/2016 08:50 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
.../...
and using this equation, one can plot the results in completely frame-indifferent terms:

deltaMass/InitialMass = function (deltaV/c , InitialVelocity/c)
.../...

It is not clear to me in your argument why InitialVelocity/c would qualify as "frame-indifferent". Different inertial observers could agree on deltaV/c but see different values for InitialVelocity/c, and hence predict different outcome deltaMass/InitialMass. The other way around, measuring a certain deltaMass/InitialMass and a certain deltaV/c would imply one peculiar InitialVelocity/c that would hold for one privileged inertial observer and not for the others. Am I misunderstanding something ?
You are correct.  "Frame Indifferent" only applies to deltaV/c.

The InitialMass is the mass of the object in the object's rest frame.   

Every observer will agree on which frame is the rest frame.  Therefore, InitialVelocity is the velocity of the object in its reference frame, the same frame used for the mass. 

This frame is a privileged, non-inertial frame.  If other frames of reference are used, not only will the Initial Velocity be different, but the Initial Mass will be different too, since it would be measured in a frame other than the object's initial rest frame.  The InitialVelocity/c term also appears in the gamma factor, and is clearly intimately associated with the mass of the object.

We are discussing an acceleration problem, after all, where the speed of the same object changes; therefore we necessarily have to deal with a non-inertial frame as regards the InitialMass and the InitialVelocity, both associated with the same frame used to measure the rest mass. 

Still, expressing deltaV/c as a frame-indifferent variable helps to better understand the graphical output.

The solution makes sense for InitialVelocity/c = 0 (γ=1) (it agrees with Bondi's momentum equation for negative mass in that case). (H. Bondi, Negative Mass in General Relativity, Rev. Mod. Phys. 29, 423;1 July 1957)

It makes sense for InitialVelocity/c = 1 (γ=∞) , from the point of view that the only way to reach the speed of light,  InitialVelocity/c = 1 (γ=∞), is with zero mass (deltaMass/InitialMass=-1).

It makes sense that there is a frontier (deltaMass/InitialMass=-1) between deltaV/c and InitialVelocity/c and that this frontier is a straight line, since deltaV/c + InitialVelocity/c = FinalVelocity/c ≤ 1 and hence the frontier is specified by the linear constraint equation deltaV/c + InitialVelocity/c = 1, giving:

InitialVelocity/c = 1 - deltaV/c

At the frontier, InitialVelocity/c equals a constant (unity) minus a frame indifferent variable (deltaV/c). 
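These limits are easy to reproduce numerically. A minimal Python sketch (the function delta_mass_bar below is my reconstruction of deltaMass/InitialMass from the momentum-conservation equation γ1 m1 v1 = γ2 m2 v2, taking FinalVelocity/c = InitialVelocity/c + deltaV/c; it is not code from the thread):

```python
import math

def gamma(beta):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def delta_mass_bar(delta_v_c, initial_v_c):
    """deltaMass/InitialMass from gamma1*m1*v1 = gamma2*m2*v2,
    taking FinalVelocity/c = InitialVelocity/c + deltaV/c."""
    final_v_c = initial_v_c + delta_v_c
    # m2/m1 = (gamma1 * v1) / (gamma2 * v2), so deltaMassBar = m2/m1 - 1
    return (gamma(initial_v_c) * initial_v_c) / (gamma(final_v_c) * final_v_c) - 1.0

# Frontier: as InitialVelocity/c + deltaV/c -> 1, gamma2 -> infinity,
# the final mass -> 0, and deltaMassBar -> -1:
print(delta_mass_bar(0.399999, 0.6))

# From rest (InitialVelocity = 0) the initial momentum is zero, so any
# deltaV > 0 forces FinalMass = 0, i.e. deltaMassBar = -1 (the Bondi case):
print(delta_mass_bar(0.5, 0.0))
```

The second value is exactly -1: starting from rest, momentum conservation alone forces the final mass to zero, consistent with the Bondi limit noted above.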

« Last Edit: 02/05/2016 11:59 PM by Rodal »

Offline frobnicat

  • Full Member
  • ****
  • Posts: 518
  • Liked: 500
  • Likes Given: 151
.../...
It is extremely hard to see how acceleration from rest in the opposite direction can require any different consumption of anything than in the original direction.
.../...

.../...
2) Take a look at the mathematical solution. The solution function is continuous; for negative values of deltaV/c it is double-valued.  The asymmetry you are addressing in your response arises because you are arbitrarily taking into account only one of the possible values for negative deltaV/c.  Mathematically, and for consistency, you should instead take into account all possible values of a multi-valued function when addressing its symmetry.

Since the solution is multi-valued I need the time to further examine it in both directions, and assess its physical significance (if any, because the notion of negative mass and energy is anything but intuitive! ).
.../...

Following from the relativistic momentum conservation (on a single axis) I get the same expression as what you wrote for deltaMassBar as a function of deltavc and vbarc. From the initial equality (momentum conservation) to this last equality there is no need to take a square root of the equation, nor to solve a second-degree polynomial: so why do you say the function is double-valued, and specifically for negative deltavc?

For instance with vbarc=1/2 and deltavc=-1/4 the expression unequivocally gives deltaMassBar=sqrt(5)-1 > 0
What would another solution be?
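That single value is easy to confirm numerically. A quick Python sketch (the delta_mass_bar function is my reconstruction of the momentum-conservation expression, with FinalVelocity/c = vbarc + deltavc, not code from either post):

```python
import math

def gamma(beta):
    # Lorentz factor for a speed given as a fraction of c
    return 1.0 / math.sqrt(1.0 - beta * beta)

def delta_mass_bar(deltavc, vbarc):
    # gamma1*m1*vbarc = gamma2*m2*(vbarc + deltavc)  =>  m2/m1 - 1
    v2 = vbarc + deltavc
    return (gamma(vbarc) * vbarc) / (gamma(v2) * v2) - 1.0

value = delta_mass_bar(-0.25, 0.5)
print(value)   # ~1.2360679... = sqrt(5) - 1, a single value
```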

I understand this is under construction  :P

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
.../...
It is extremely hard to see how acceleration from rest in the opposite direction can require any different consumption of anything than in the original direction.
.../...

.../...
2) Take a look at the mathematical solution. The solution function is continuous; for negative values of deltaV/c it is double-valued.  The asymmetry you are addressing in your response arises because you are arbitrarily taking into account only one of the possible values for negative deltaV/c.  Mathematically, and for consistency, you should instead take into account all possible values of a multi-valued function when addressing its symmetry.

Since the solution is multi-valued I need the time to further examine it in both directions, and assess its physical significance (if any, because the notion of negative mass and energy is anything but intuitive! ).
.../...

Following from the relativistic momentum conservation (on a single axis) I get the same expression as what you wrote for deltaMassBar as a function of deltavc and vbarc. From the initial equality (momentum conservation) to this last equality there is no need to take a square root of the equation, nor to solve a second-degree polynomial: so why do you say the function is double-valued, and specifically for negative deltavc?

For instance with vbarc=1/2 and deltavc=-1/4 the expression unequivocally gives deltaMassBar=sqrt(5)-1 > 0
What would another solution be?

I understand this is under construction  :P

For acceleration (deltaV/c>0) there is only one way to satisfy conservation of momentum: creation of negative mass.



which is a single manifold occupying only half of the area defined by deltaV/c and InitialV/c.

In contrast,

there are two possible ways to achieve negative values of deltaV/c (associated with two different values of InitialVelocity/c):

1) with negative mass creation, such that deltaMassBar is negative.  You have to create this negative mass out of thin air; since it is production of a negative quantity, I can see RERT calling this "consumption"

2) with positive mass creation, such that deltaMassBar is positive (I would call this positive mass creation "production" rather than "consumption").

RERT is arbitrarily following only choice #1 and ignoring #2, since he is referring to "consumption" of something in:

Quote
It is extremely hard to see how acceleration from rest in the opposite direction can require any different consumption of anything than in the original direction.

I would not consider creation of positive mass to be "consumption". Actually, it is the opposite of consumption.

Neither he nor I stated any constraint restricting the choices of deltaV/c and InitialVelocity/c to manifold #1 rather than manifold #2.  So, when discussing deceleration, both options #1 and #2 have to be considered.
 
The terminology could be improved from "multivalued" to "multi-folded sheets" or something like that  ;), but again, this is under construction and the answers are under construction ;)

« Last Edit: 02/06/2016 01:24 AM by Rodal »

Offline frobnicat

  • Full Member
  • ****
  • Posts: 518
  • Liked: 500
  • Likes Given: 151
I see. So actually deltaMass<0 for deltaV<0 only if deltaV<-initialV. In the common sense (relative to an implicit ground) this is more than deceleration: it is deceleration followed by acceleration in the opposite direction (and going through an infinity in the equation...)

Sticking to a "reasonable" deceleration, of a magnitude such that the vehicle doesn't come to a halt (it still goes in the same direction, only slower), that is -initialV<deltaV<0 (assuming initialV>0), then deltaMass>0 always holds. It doesn't seem "arbitrary", when discussing deceleration (in the common sense), to "choose" deltaV such that -initialV<deltaV<0. For instance, an unpowered ground vehicle that can only brake (only dissipate energy) can't brake so hard as to get a deltaV in excess of -initialV.
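The behavior of the momentum-conservation expression near and past the halt can be checked with a short Python sketch (delta_mass_bar is my reconstruction, taking FinalVelocity/c = vbarc + deltavc; the chosen numbers are illustrative only):

```python
import math

def gamma(beta):
    # Lorentz factor for a speed given as a fraction of c
    return 1.0 / math.sqrt(1.0 - beta * beta)

def delta_mass_bar(deltavc, vbarc):
    # deltaMass/InitialMass from gamma1*m1*v1 = gamma2*m2*v2
    v2 = vbarc + deltavc
    return (gamma(vbarc) * vbarc) / (gamma(v2) * v2) - 1.0

vbarc = 0.5
# "Reasonable" braking, -initialV < deltaV < 0: positive mass production.
print(delta_mass_bar(-0.25, vbarc))     # positive (~ sqrt(5) - 1)
# deltaV -> -initialV: the vehicle halts and the expression blows up.
print(delta_mass_bar(-0.4999, vbarc))   # very large and positive
# deltaV < -initialV (halt, then reversal): the sign flips, negative mass.
print(delta_mass_bar(-0.75, vbarc))     # negative (~ -(sqrt(5) + 1))
```

So deltaMass stays positive throughout the "reasonable" braking range and only goes negative past the pole at deltaV = -initialV, as described.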

I can't help but have the impression that from the start your approach defines an implicit ground... In Newtonian / Galilean relativity terms :
m1v1=m2v2 is not covariant (unless v2=v1 and m2=m1)
m1(v1+vc) ≠ m2(v2+vc)

I'd say that m1v1=m2v2 (and v2≠v1) hides a term for an implicit mass at 0 velocity (either left hand or right hand, I'll choose left hand) :
m1v1+(m2-m1)0=m2v2
which is now fully covariant :
m1(v1+vc)+(m2-m1)(0+vc)=m2(v2+vc)

See what I mean? A non-covariant equation seems ill-defined, unphysical from the start.
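The covariance point can be checked mechanically. A Newtonian sketch in Python, with m1, v1, m2 and the boost vc as arbitrary illustrative numbers:

```python
# Newtonian sketch: m1*v1 = m2*v2 with m2 != m1 is not Galilean covariant,
# but making the implicit mass term (m2 - m1) at velocity 0 explicit fixes it.
m1, v1 = 2.0, 3.0
m2 = 1.5
v2 = m1 * v1 / m2          # chosen so that m1*v1 = m2*v2 holds in this frame
vc = 10.0                  # an arbitrary Galilean boost velocity

# Bare equation, boosted: the two sides disagree by (m1 - m2)*vc.
mismatch = m1 * (v1 + vc) - m2 * (v2 + vc)
print(mismatch)            # 5.0, i.e. (m1 - m2)*vc: not covariant

# With the implicit term (m2 - m1)*(0 + vc) added, the boosted equation
# becomes an identity in every frame:
residual = m1 * (v1 + vc) + (m2 - m1) * (0.0 + vc) - m2 * (v2 + vc)
print(residual)            # 0.0: fully covariant
```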

Obviously, again sticking to Newtonian mechanics for sake of simplicity, a closed system that respects both momentum and energy conservation just goes
m1v1=m2v2
½m1v1² = ½m2v2²
=> v1=v2 and m1=m2

unless m1=m2=0, for instance as the result of the vehicle being composed of equal amounts of positive and negative mass (or of being non-existent?)

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Yes, the vehicle would have to start, at zero speed, with a negative mass equal (in absolute magnitude) to its positive mass (*).  But if creation of negative mass is allowed (in other words, if one can violate the Weak Energy Condition), all this means is that the necessary amount of negative mass for acceleration from a complete stop starts at 100% (equal in absolute magnitude to the amount of positive mass), and, to accelerate further (up to speeds of about ~1/2 of c), the amount of negative mass that needs to be created for successive deltaV increments diminishes with time.

After all, in Bondi's and Forward's drive concepts, one starts with a full 100% of negative mass (in the diametric drive, with two masses: one having 100% positive mass and the other having 100% negative mass).

But conversely (as per Shawyer's/TheTraveller's argument that it has been found in experiments that the EM Drive needs to be motivated  ;)   ::)    ???  ), the vehicle could be accelerated to a relatively slow speed by some other conventional means, at which point (for relatively small speeds) the amount of negative mass necessary is much smaller, as shown in the following graph:
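That trend can also be sketched numerically (delta_mass_bar is my reconstruction of deltaMass/InitialMass from momentum conservation; the fixed step deltaV/c = 0.01 is an arbitrary illustrative value):

```python
import math

def gamma(beta):
    # Lorentz factor for a speed given as a fraction of c
    return 1.0 / math.sqrt(1.0 - beta * beta)

def delta_mass_bar(deltavc, vbarc):
    # deltaMass/InitialMass from gamma1*m1*v1 = gamma2*m2*v2
    v2 = vbarc + deltavc
    return (gamma(vbarc) * vbarc) / (gamma(v2) * v2) - 1.0

# Negative mass that must be created for the same small step deltaV/c = 0.01,
# as a function of the speed already attained:
for vbarc in (0.0, 0.1, 0.3, 0.5):
    print(vbarc, delta_mass_bar(0.01, vbarc))
# From rest the full -100% is needed; by vbarc = 0.5 only a few percent.
```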





__________________________
(*) Excellent point about showing covariance, which depends on the initial mass.  You are correct: the equations as written implied positive mass = -negative mass as the initial condition, just like Bondi and Forward (that's why I had referenced that point)  :)

...The solution makes sense for InitialVelocity/c = 0 (γ=1) (it agrees with Bondi's momentum equation for negative mass in that case). (H. Bondi, Negative Mass in General Relativity, Rev. Mod. Phys. 29, 423;1 July 1957)
...

2) Forward (Robert Forward,  "Negative matter propulsion", Journal of Propulsion and Power, Vol. 6, No. 1 (1990), http://arc.aiaa.org/doi/abs/10.2514/3.23219?journalCode=jpp), and Bondi, have used similar expressions when discussing momentum conservation (https://en.wikipedia.org/wiki/Negative_mass#Runaway_motion), but they only consider the case of two bodies with identical absolute values of mass: one body with mass +m and another with mass -m, instead of the case being discussed here of continuously variable mass.
« Last Edit: 02/06/2016 12:52 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
This very interesting quote is from Geoffrey Landis, then associated with the NASA Lewis Research Center (nowadays NASA Glenn):

GEOFFREY A. LANDIS "Comment on 'Negative matter propulsion'" Journal of Propulsion and Power, Vol. 7, No. 2 (1991), pp. 304-304.  doi: 10.2514/3.23327


Quote from:  GEOFFREY A. LANDIS
It is interesting to note that, in the spaceship consisting of positive and negative mass elements discussed by Forward, as the total mass of the spaceship approaches zero (M− + M+ ≈ 0) the Brownian motion of the ship due to impact of various particles will buffet it around at increasingly large velocities. Even in a perfect vacuum, photons of cosmic background radiation will become important if the mass is low enough. At M− = −M+, the mass of the ship equals zero and any impact will apparently send it moving off at the speed of light. (Actually M will never precisely equal zero, as the ship will be constantly absorbing and emitting thermal photons.)
A particle hitting a zero mass spaceship would, of course, actually hit either the positive or negative mass portion. In a ship consisting of nearly equal amounts of positive and negative mass, the center of mass can move faster than either of the constituent masses and will do so whenever the distance between the two masses changes. Unless the ship is allowed to come apart, the true motion of the ship must eventually reconcile with the motion of the center of mass. This occurs due to the force on the link connecting the masses. The force on the link will cause the masses to move as described by Forward, so that even a small initial impulse will cause a very large change in velocity if the positive and negative masses are nearly equal. This fact is of use in propulsion: a very nearly zero mass spaceship could be propelled by a flashlight.

The point being that, if one were able to produce an amount of negative mass equal in absolute magnitude to the positive mass, then one might as well use (instead of the hypothetical EM Drive creating negative mass) the propulsion concept of Bondi/Forward, since, as they stated and as Landis remarked, with M− = −M+ the Bondi/Forward pair of masses will self-accelerate to high speeds.

My contribution to this is to show that:

* for InitialVelocity/c > 0, the amount of negative mass required quickly diminishes, such that a small acceleration can result from very small amounts of negative mass (compared to the magnitude of the positive mass)

As shown in the above post.



https://en.wikipedia.org/wiki/Hermann_Bondi
« Last Edit: 02/07/2016 08:57 PM by Rodal »

Offline RERT

Dr.Rodal -

Frobnicat seems far more nimble than me at this stuff, but since you ask for some mathematical critique, I think I owe you a stab.

There are three frames involved: frame 0, where the object is initially at rest. Frame 1, where it has acquired its initial velocity, and frame 2, where it has acquired its final velocity.

The total mass-energy is thus m0c^2, and does not vary between frames: in fact I think your equation

m1V1/gamma1 = m2V2/gamma2 should be replaced by the conservation of the norm of the energy-momentum vector, viz.:

m0^2c^4 =
gamma1^2*m1^2*c^4-gamma1^2*m1^2*v1^2 =
gamma2^2*m2^2*c^4-gamma2^2*m2^2*v2^2

[I may or may not have this right, but there will definitely be mass-energy terms mixed with the momentum terms.] I'd take a stab that this is the kind of covariant formulation Frobnicat alluded to above. I won't comment on later parts of the analysis, since if my comment is correct the rest would not follow.

R.

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Dr.Rodal -

Frobnicat seems far more nimble than me at this stuff, but since you ask for some mathematical critique, I think I owe you a stab.

There are three frames involved: frame 0, where the object is initially at rest. Frame 1, where it has acquired its initial velocity, and frame 2, where it has acquired its final velocity.

The total mass-energy is thus m0c^2, and does not vary between frames: in fact I think your equation

m1V1/gamma1 = m2V2/gamma2 should be replaced by the conservation of the norm of the energy-momentum vector, viz.:

m0^2c^4 =
gamma1^2*m1^2*c^4-gamma1^2*m1^2*v1^2 =
gamma2^2*m2^2*c^4-gamma2^2*m2^2*v2^2

[I may or may not have this right, but there will definitely be mass-energy terms mixed with the momentum terms.] I'd take a stab that this is the kind of covariant formulation Frobnicat alluded to above. I won't comment on later parts of the analysis, since if my comment is correct the rest would not follow.

R.
Relativity's Energy-Momentum Equation ( https://en.wikipedia.org/wiki/Energy%E2%80%93momentum_relation ) is an identity between energy and momentum that is satisfied exactly (just like 1=1) if energy is conserved. If  Relativistic Momentum is conserved and if Relativistic Energy is conserved, it trivially follows that Relativistic Energy-Momentum is conserved. [*]

The equations are well-known:

Lorentz factor (https://en.wikipedia.org/wiki/Lorentz_factor)

γ = 1/√[1 − (v/c)²]



Momentum (https://en.wikipedia.org/wiki/Momentum#Relativistic_mechanics)
p = γ m v

which is the equation I used in my post on conservation of momentum,



where, in the above equation and in the ones to follow, m is the rest mass m=mo, the mass of an object in its rest frame. 



KineticEnergy (https://en.wikipedia.org/wiki/Kinetic_energy#Relativistic_kinetic_energy_of_rigid_bodies)
K = (γ − 1) m c²



https://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence
PotentialEnergy = m c² (the rest energy)



TotalEnergy = E = KineticEnergy + PotentialEnergy
                         = (γ − 1) m c² + m c²
                         = γ m c²



Energy-Momentum Equation ( https://en.wikipedia.org/wiki/Energy%E2%80%93momentum_relation ):

E² = p² c² + m² c⁴

or

m² c⁴ = E² − p² c²

which is, and should obviously be, an identity (if you substitute the above values, you get the identity 1 = 1)


m² c⁴ = (γ m c²)² − (γ m v)² c²
          = γ² m² c⁴ − γ² m² v² c²

γ² m² v² c² = (γ² − 1) m² c⁴

v²/c² = (γ² − 1)/γ²

1 = 1

which is just a trivial statement of the identity that one equals one, since the Lorentz Factor

γ² = 1/(1 − (v/c)²)  by definition

QED (quod erat demonstrandum)
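The same identity can be verified numerically for arbitrary velocities. A quick Python sketch in units where c = 1 and m = 1 (illustrative values only):

```python
import math

c = 1.0   # work in units where c = 1
m = 1.0   # rest mass

def gamma(v):
    # Lorentz factor
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def invariant(v):
    """E^2 - p^2 c^2; should equal m^2 c^4 for every velocity v."""
    E = gamma(v) * m * c ** 2    # total energy, E = gamma m c^2
    p = gamma(v) * m * v         # relativistic momentum, p = gamma m v
    return E ** 2 - (p * c) ** 2

for v in (0.0, 0.3, 0.9, 0.999):
    print(v, invariant(v))       # second column is m^2 c^4 = 1 every time
```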




[*] And if energy (and mass) are not conserved, as in a system in which negative mass is continuously created (or destroyed), it would be incompatible to demand conservation of energy, since mass-energy is not conserved in such a system.

If mass-energy is continuously created or destroyed, one should use a single reference configuration, and it is better to deal with conservation of energy separately.  Also, as already discussed here: http://forum.nasaspaceflight.com/index.php?topic=39214.msg1488362#msg1488362 the InitialVelocity must be measured with respect to the same frame where the InitialMass of the object was measured.  This is an acceleration problem, hence the frame where the rest mass is measured is a privileged, non-inertial frame.   If other frames of reference are used, not only will the Initial Velocity be different, but the Initial Mass will be different too, since it would be measured in a frame other than the object's initial frame of reference, the frame used to measure (and keep track of) its changing mass.
« Last Edit: 02/09/2016 01:00 AM by Rodal »

Offline frobnicat

  • Full Member
  • ****
  • Posts: 518
  • Liked: 500
  • Likes Given: 151
Dr Rodal, you insist that you are here doing a study of a closed system composed of a single lump of mass, i.e. that there is no separation (separation into 2 parts from a single part, as would be the case for action/reaction), nor joining ("melting" of 2 parts into a single part, as would be the case for an inelastic collision/aggregation), nor bounce (2 parts exchanging momentum but still being separate before and after such interaction). Do I understand your premise correctly?

If so, in a given frame, considering one lump of rest mass m1 at velocity v1 and associated γ1=γ(v1) before, and the same singleton object of rest mass m2 at velocity v2 and associated γ2=γ(v2) after, taking both conservation of momentum and of (total) energy gives (following the equations you recall):
γ1 m1 v1 = γ2 m2 v2  (CoM)
γ1 m1 c² = γ2 m2 c²  (CoE)
⇔ (since c²≠0)
γ1 m1 v1 = γ2 m2 v2
γ1 m1 = γ2 m2
⇔ (substituting CoE into CoM)
γ1 m1 v1 = γ1 m1 v2
γ1 m1 = γ2 m2
⇔ (since γ1≠0)
m1 v1 = m1 v2
γ1 m1 = γ2 m2

Now, everything depends on m1

m1=0: the total mass of the closed system is 0 from the start;
since γ2≠0 ⇒ m2=0: the total mass must stay 0,
and v1 and v2 are independent

m1≠0: the total mass of the closed system is not 0 from the start;
since m1≠0 ⇒ v1=v2: the velocity must stay the same,
and so γ1=γ2, and it follows that m1=m2

I don't see how you can start with a singleton-lump closed system that's supposed to respect conservation of momentum and conservation of energy in the framework of SR (even assuming the possibility of negative rest mass, or imaginary rest mass, whatever) and not arrive at the same conclusion. There appears to be a contradiction between your premise (closed system) and the conclusions you draw from the equation of conservation of momentum alone, when a closed system needs both constraints to be taken together.
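This closed-system argument can be illustrated numerically: fix m1 and v1, let CoE determine γ2·m2, and scan candidate final velocities for a zero of the momentum residual. A Python sketch (units with c = 1; the grid scan and the values m1 = 1, v1 = 0.4 are illustrative only):

```python
import math

def gamma(v):
    # Lorentz factor, in units with c = 1
    return 1.0 / math.sqrt(1.0 - v * v)

m1, v1 = 1.0, 0.4                  # initial rest mass and velocity
A = gamma(v1) * m1                 # CoE fixes gamma2*m2 = gamma1*m1 = A

def com_residual(v2):
    """Momentum-conservation residual once CoE has been imposed: A*(v1 - v2)."""
    return gamma(v1) * m1 * v1 - A * v2

# Scan candidate final velocities: the residual vanishes only at v2 = v1,
# and then m2 = A/gamma(v2) = m1, so nothing about the lump can change.
best_v2 = min((k / 1000.0 for k in range(-999, 1000)),
              key=lambda v2: abs(com_residual(v2)))
m2 = A / gamma(best_v2)
print(best_v2, m2)                 # 0.4 1.0
```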

Also it is not clear how you consider mass...
.../...
where, in the above equation and in the ones to follow, m is the rest mass m=mo, the mass of an object in its rest frame.  Also, as already discussed here: http://forum.nasaspaceflight.com/index.php?topic=39214.msg1488362#msg1488362 the InitialVelocity must be measured with respect to the same frame where the InitialMass of the object was measured.  This is an acceleration problem, hence the frame where the rest mass is measured is a privileged, non-inertial frame.   If other frames of reference are used, not only the Initial Velocity will be different, but the Initial Mass will be different too, if measured in any frame other than the object's initial frame of reference to measure its mass.
.../...

<<m is the rest mass m=mo, the mass of an object in its rest frame.>>
All right, so we are not using the so-called relativistic mass mrel, and by avoiding the traps of mrel we are conforming to the prescriptions of modern physics teaching. Fine with me. That means that whenever we talk about mass we can be confident it is not mrel(v), a function of velocity, i.e. a covariant value that depends on the inertial frame of reference; rather, we are talking about an invariant value, one that has the exact same value for all observers. The fact that we are talking about this invariant mass mo appears clearly in the equations: for instance, momentum is given by p=γmov, otherwise we would have p=mrelv. To me it then appears to be a contradiction to use mo in the equations but to discuss those very same variables as if they were a function of velocity:
<<but the Initial Mass will be different too, if measured in any frame other than the object's initial frame of reference to measure its mass. >>

And I fail to see the physical meaning of <<InitialVelocity must be measured with respect to the same frame where the InitialMass of the object was measured>>, which again seems to imply the use of mrel, a function of velocity wrt. the observer, when all the equations use the invariant mo, which is not a function of velocity wrt. the observer

But all those questions are less important than the first part of this post: unless we are talking about an object of 0 total (invariant) mass from the start, in SR a closed-system single object with mo≠0 just has an inertial trajectory of constant velocity and constant (invariant) mass; in this latter case the hypothesis of the occurrence of negative mass doesn't change the game.

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...I don't see how you can start with a singleton lump closed system that's supposed to respect conservation of energy and conservation of mass in the framework of SR...
Stop right there  :). Nowhere in the above considerations have I  considered conservation of energy.  I only addressed conservation of momentum, so far.

The post title said it: 1) CONSERVATION OF MOMENTUM (not conservation of energy) and 2) VARIABLE MASS in a closed system:

CONSERVATION OF RELATIVISTIC MOMENTUM FOR REACTION-LESS PROPULSION THROUGH VARIABLE INERTIAL MASS

On the contrary, it was stated that the equations imply continuous creation (or destruction) of negative mass, which is in violent contradiction with conservation of mass-energy (unless, of course FinalMass=InitialMass=0 for any deltaV).



« Last Edit: 02/09/2016 01:59 AM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Dr Rodal, you insist that you are here doing a study of a closed system composed of a single lump of mass, i.e. that there is no separation (separation in 2 parts from a single part, as would be the case for action/reaction) nor joining ("melting" of 2 parts in a single part, as would be the case for an inelastic collision/aggregation) nor bounce (2 parts exchanging momentum but still being separate before and after such interaction). Do I understand correctly your premise ?

...
All right so we are not using so called relativistic mass mrel, and by avoiding the traps of mrel we are conforming to prescriptions of modern physics teaching. Fine with me. ...
When using the term "lumped mass" I meant it in the sense of a lumped parameter, and did not address in what sense (if it is at all physically possible) this may involve positive and negative masses inside. 

Clearly, unless FinalMass=InitialMass=0 for any deltaV, the equations imply that the rest mass would be changing from an initial value to a final value, which contradicts the relativistic concept of a conserved rest mass, and clearly implies that mass-energy cannot be conserved in such a system.

The equations imply that either:

* mass-energy is conserved and hence the only way to satisfy conservation of momentum is with an initial mass = 0 and is always zero (the solution could be due to two masses, as in Bondi/Forward diametric drive, having equal absolute magnitude but opposite sign)

* mass-energy is not conserved: for example because the system composed of a positive mass plus a negative mass is such that the positive mass is immutable but that negative mass can be continuously created (or destroyed)

Therefore the best way to proceed is to continue to address the problem accounting separately for a positive rest mass that is immutable (conservation of mass applies to the positive mass) and, separately, for a negative mass that can be continuously created or destroyed. 

Clearly, neither conservation of mass nor conservation of energy applies to the negative mass (which can be continuously created or destroyed) in this case: the rest mass of the negative-mass component is not constant.



If we have a system that can change its rest mass (by creation or destruction of negative mass), it seems to me that it would be best handled in a reference frame for the Initial Configuration, as in Lagrangian coordinates embedded in the material, for example when dealing with very large deformations and very large strains of a body.




« Last Edit: 02/09/2016 01:08 AM by Rodal »

Offline RERT

...
m² c⁴ = E² − p² c²


....

Thank you for your exposition. The above equation is essentially what I quoted, assuming conservation of energy-momentum, in the three relevant frames.

As you say, QED - Quite Enough Done.

R.

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Is the concept of a time-dependent, variable "rest-mass" (mass that would be measured at rest), allowed by the following theories?



1) Special Relativity: NO. The rest mass cannot be a function of the time coordinate, as Lorentz covariance would not be preserved.  Variation of rest mass cannot be due to kinematic (velocity or position) evolution.



2) Einstein's General Relativity (GR): It appears the answer is NO.  A time-dependent variable mass would give rise to a time-dependent Energy-Stress tensor, solely due to the mass variability with time, which is not consistent with Einstein's GR theory of gravitation.   Also, it appears that Relativity's Energy-Momentum equation:

m² c⁴ = E² − p² c²

may prevent general time-dependent variable mass.

Note: I need to review how Woodward accommodates variable negative mass-energy in his theory (which I understand he states is consistent with Einstein's General Relativity), as it appears to me that there should be an issue with the time-dependent Energy-Stress tensor and with the Energy-Momentum equation in such a theory.



3) Scalar-Tensor theories of Gravity: YES,  through the introduction of a scalar field-dependence, variable mass is allowed:  a time-dependent Energy-Stress tensor is compatible with the theory (because of the scalar field). 



4) Fifth-dimension (or higher-dimensional) field theories:  YES, they allow variable mass.




CONCLUSION:  Variation of mass represents an additional degree of freedom, and hence field theories that can accommodate this concept are, for example, scalar-tensor theories of gravitation or five-dimensional or higher-dimensional field theories of gravitation that can accommodate this extra degree of freedom.

This is completely in agreement with Minotti's paper on the EM Drive

Scalar-tensor theories and asymmetric resonant cavities
Fernando O. Minotti
Grav. & Cosmol. 19 (2013) 201

that motivated this analysis, as Minotti uses a scalar-tensor theory of gravitation derived from Kaluza-Klein's theory.

It is also in agreement with Dr. White invoking 5th or higher dimensional theories for his Quantum Vacuum hypothesis.

« Last Edit: 02/09/2016 10:53 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Hi,
I did not follow the whole conversation during the last few days.
Why would a variable mass be related to any force toward the small end of the cavity? If there IS a lower or higher mass associated with the EM field inside the resonator, what does that mean? The smaller-diameter side of the cavity itself would have a lower mass, because it consists of a smaller volume of copper than the larger side.  Also, the Earth's gravity is almost homogeneous over the size of the cavity. How could the cavity generate thrust by shifting its own center of mass? A slightly different force composition in relation to the surrounding gravity field, OK, but thrust generation?
IMHO only a negative mass value would explain thrust generation against the background gravity field at all (repulsive energy). The direction of this force would be against the attractive gravity of the biggest mass nearby: the Earth itself.
I think I have to study your last posts. :)
« Last Edit: 02/09/2016 06:41 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Hi,
I did not follow the whole conversation during the last few days.
Why would a variable mass be related to any force toward the small end of the cavity? If there IS a lower or higher mass associated with the EM field inside the resonator, what does that mean? The smaller-diameter side of the cavity itself would have a lower mass, because it consists of a smaller volume of copper than the larger side.  Also, the Earth's gravity is almost homogeneous over the size of the cavity. How could the cavity generate thrust by shifting its own center of mass? A slightly different force composition in relation to the surrounding gravity field, OK, but thrust generation?
IMHO only a negative mass value would explain thrust generation against the background gravity field at all (repulsive energy). The direction of this force would be against the attractive gravity of the biggest mass nearby: the Earth itself.
I think I have to study your last posts. :)

1) According to Minotti's paper (quoted in my post above) the force on the EM Drive can be directed towards either end, depending on the electromagnetic mode shape

2) Minotti's paper (using a scalar-tensor unified theory of gravitation and electromagnetism) predicts negative energy-mass (violation of the Weak Energy Condition) in the EM Drive, which would be variable with power, with mode shape and with time (for time-dependence of power input).

3) The force in Minotti's paper is due to coupling with a scalar field in a unified theory of electromagnetism with gravitation, and not due to the Earth's gravitation.  The force of the Earth's gravitation is insignificant compared to the force of a small magnet.

4) Minotti's paper predicts, that for copper wall thickness ~1 mm, the thicker the copper (as long as significantly greater than the skin depth), the greater the force.

5) The force is not directly analogous to a Bondi/Forward diametric drive, because the Bondi/Forward diametric-drive force is due to two separate bodies without an electromagnetic field.  Instead, the force in Minotti's formulation is due to coupling of the electromagnetic field with a scalar field, and it is most dependent on the electromagnetic mode shape.  I brought up the Bondi/Forward analogy only to illustrate how zero mass can be achieved by a negative mass having the same absolute magnitude as the positive mass.

6) The linear version of the unified theory used by Minotti predicts gravitational effects due to the Earth's magnetic field which are not observed.  Minotti briefly notes that perhaps a nonlinear version would cancel these spurious effects.
« Last Edit: 02/09/2016 10:08 PM by Rodal »

Offline Rodal

Quote
...In order to have a non-dimensional scalar field φ of values around unity, in expression
(1) the constant G0 representing Newton gravitational constant is included...
Based on this statement in section 2 of the paper there is a (weak) coupling to gravity, and if so, the scalar also couples to the Earth's gravity field. And yes, gravity is a very weak force in comparison to the other forces, including electromagnetism. ::)
I read the paper a year or two ago and it's an impressive idea. I have to read it again to follow your statements, but it's hard stuff and it will take a while.
That's correct.  Minotti also includes the effect of the Earth's magnetic field.  Perhaps the small magnitude of the Earth's gravitational contribution is best appreciated from the fact that the force he derives depends on the electromagnetic mode shape and is independent of the EM Drive's orientation with respect to the Earth.
« Last Edit: 02/09/2016 07:34 PM by Rodal »

Offline ThinkerX

  • Full Member
  • **
  • Posts: 283
  • Alaska
  • Liked: 103
  • Likes Given: 55
Ok, trying to wrap what's left of my mind around this:

Quote
4) Minotti's paper predicts that, for copper wall thickness of ~1 mm, the thicker the copper (as long as it is significantly thicker than the skin depth), the greater the force.

So, say you have two EM Drive units that are identical, except one has 'skin depth' of 1 mm and the other has 'skin depth' of say 3 mm.  According to this theory, the second device should perform significantly better.  Is that correct?

If so, this appears to be something within the capabilities of our DIY crowd.

But...

1 - would the increased weight of the device with the thicker skin offset the thrust measurements?  (I suspect I am missing something glaringly obvious here.)

2 - Does the entire skin need to be thicker, or just the end plates?   

Offline Rodal

Ok, trying to wrap what's left of my mind around this:

Quote
4) Minotti's paper predicts that, for copper wall thickness of ~1 mm, the thicker the copper (as long as it is significantly thicker than the skin depth), the greater the force.

So, say you have two EM Drive units that are identical, except one has 'skin depth' of 1 mm and the other has 'skin depth' of say 3 mm.  According to this theory, the second device should perform significantly better.  Is that correct?
...
2 - Does the entire skin need to be thicker, or just the end plates?
Actually, Minotti's theory predicts that the force is proportional to the total thickness of the copper, as long as that thickness is significantly greater than the skin depth and the wall remains "thin" (not much thicker than 1 mm).  For example, at ~2 GHz the skin depth of copper is about 1 micrometer, while the total thickness considered in his example was 1 mm.

Quote from: Minotti
Assuming a cavity with thin walls (but much thicker than the penetration depth δ,
in order to the boundary conditions used to be correct) of mass surface density σ ...
There are no details in the literature as to the precise dimensions of the cavities
used in the experiments, so that an example roughly similar to the overall dimension
reported and with the proportions observed in the published photographs will be used.
Assuming a wall of thickness 1 mm, and a copper mass density of 8.9 × 10³ kg/m³, we
have σ = 8.9 kg/m².

According to the theory, if another EM Drive with the same geometry, the same copper material, and operating at the same frequency and mode shape has a total wall thickness of 2 mm (0.079 in), the force should be two times greater than in the EM Drive with 1 mm (0.039 in) thick walls.

In this statement, both EM Drives should have uniform thickness: same thickness for walls as for end plates.

Yes, this should be carefully tested in experiments.
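As a minimal numeric sketch of that scaling (my own illustration, not from Minotti's paper: it assumes only that the predicted force is proportional to the wall's mass surface density σ = ρ·t while t is much greater than the skin depth; the function names are mine):

```python
# Skin depth and mass surface density for a copper cavity wall.
# Assumption (per the discussion above of Minotti's example): the predicted
# force scales linearly with sigma = rho * t, valid only while the wall
# thickness t is much greater than the skin depth delta yet stays "thin".
import math

RHO_CU = 8.9e3            # copper mass density, kg/m^3
RESISTIVITY_CU = 1.68e-8  # copper electrical resistivity, ohm*m
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(freq_hz):
    """Classical skin depth: delta = sqrt(resistivity / (pi * f * mu0))."""
    return math.sqrt(RESISTIVITY_CU / (math.pi * freq_hz * MU0))

def surface_density(thickness_m):
    """Mass per unit wall area: sigma = rho * t."""
    return RHO_CU * thickness_m

delta = skin_depth(2e9)            # on the order of a micrometer at ~2 GHz
sigma_1mm = surface_density(1e-3)  # 8.9 kg/m^2, matching Minotti's figure
force_ratio = surface_density(2e-3) / sigma_1mm  # 2.0: double wall, double force
```

At 2 GHz the computed δ is about 1.5 µm, so a 1 mm wall is several hundred skin depths thick and the "much thicker than δ" condition is comfortably met.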
« Last Edit: 02/10/2016 02:48 AM by Rodal »

Offline frobnicat

  • Full Member
  • ****
  • Posts: 518
  • Liked: 500
  • Likes Given: 151
Dr Rodal, thanks for answering my comments about your work around "negative mass" creation and conservation of momentum. It was clear from the equations that you considered only conservation of momentum and didn't consider conservation of total energy, but from early posts (on the main EM drive thread) I had the impression that you were assuming some compatibility with conservation of energy, for instance (bold added by me for emphasis) :

.../...
Variable mass, implying the need for negative mass to self-accelerate, addresses both conservation of momentum and it also addresses conservation of energy.

Energy is conserved, and such a propulsion device is not a free-energy machine, because the greater the speed, the lower the mass.  More on that later...

(The practical problem of course is that up to now, nobody has found experimental evidence of negative mass  ;) )

Also (bold added by me)

.../...
4) The EM Drive is a closed system, in which case the only way I see to conserve momentum-energy for acceleration of the EM Drive is to have creation of negative mass-energy in the EM Drive
.../...

I see two different problems. Possibility of negative mass, that is Weak Energy Condition breaking, is one thing. But modification of rest (invariant) mass in a closed system without a balanced counterpart in changes in kinetic energy is another one, it looks more like conservation of (total) energy breaking, whether the change appears as + or - mass.

As an example of the difference between those two hypotheses: we can apply usual SR conservation of energy and momentum when considering a single particle of mass m1>0 splitting into two particles of mass m2>m1 and m3<0. The hypothesis of the existence of a particle of mass<0 doesn't change the equations. But your approach seems to ignore one equation (conservation of energy) and hence gives the system a degree of freedom absent from this initial example. Negative mass deltas are then not specifically implicated, and indeed your solution space also shows positive (unbalanced) delta mass, i.e. actually either positive or negative total energy evolutions.
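That split example can be checked numerically (my own sketch, units c = 1, illustrative masses). Momentum conservation forces the fragments to carry opposite momenta ±p, and energy conservation, with the negative-mass fragment on the negative-energy branch of E² = m² + p², reads m1 = √(m2² + p²) − √(m3² + p²); a real solution exists whenever 0 < m1 < m2 − |m3|:

```python
# Two-body split in special relativity with one negative-mass fragment,
# units c = 1. A particle of mass m1 at rest splits into m2 > m1 and
# m3 < 0. Momentum conservation: the fragments carry opposite momenta +-p.
# Energy conservation (negative-mass fragment on the E < 0 branch):
#   m1 = sqrt(m2**2 + p**2) - sqrt(m3**2 + p**2)
# Both conservation laws hold; negative mass alone breaks neither.
import math

def split_momentum(m1, m2, m3, tol=1e-12):
    """Solve for the shared momentum magnitude p by bisection.
    Requires 0 < m1 < m2 - abs(m3); the values used here are illustrative."""
    f = lambda p: math.sqrt(m2**2 + p**2) - math.sqrt(m3**2 + p**2) - m1
    lo, hi = 0.0, 10.0 * (m1 + m2 + abs(m3))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

p = split_momentum(1.0, 1.5, -0.2)
E2 = math.sqrt(1.5**2 + p**2)       # positive-energy fragment
E3 = -math.sqrt(0.2**2 + p**2)      # negative-energy fragment
# check: total energy E2 + E3 == m1, total momentum p + (-p) == 0
```

Both conservation laws are satisfied exactly, illustrating the point that admitting m < 0 by itself does not break the SR equations.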

Your latest answer clarifies this, citing a scalar field or a supplementary spatial dimension as required to make sense of such mass variation... I would have a hard time following Minotti (or you following Minotti) in detail on such a topic outside of the usual SR application, but maybe I could understand a few words of how, one way or another, energy is conserved in the end, so as to say that the approach "conserves momentum-energy"?

Otherwise, how are poor SR pedestrians supposed to make sense of conservation of momentum alone? It is my understanding that when positing one conservation equation in SR, whether or not considering other constraints, the equation can have meaning in SR only if it is frame invariant. In other terms, and trying to be mathematically factual, when writing one conservation equation as
function( v1, ..., vk, m1, ..., mk)=0
applying a same arbitrary boost to all velocities
v1'=boost(v1)  ...  vk'=boost(vk)
then we should check that following expression still holds :
function( v1', ..., vk', m1, ..., mk)=0

And I think that the single momentum equation you start from doesn't respect that, so I don't see how it can bear physical meaning in a relativistic context, or what the value is of the interpretations you draw from it.
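A concrete version of that check (my own sketch, units c = 1, illustrative numbers): choose m2 so that the single momentum balance γ1·m1·v1 = γ2·m2·v2 holds in one frame, then apply the same boost to both velocities and evaluate it again:

```python
# Frame-invariance check of the kind described above: take the
# variable-mass momentum balance gamma1*m1*v1 == gamma2*m2*v2 (m1 != m2),
# make it hold in one frame, then apply the same Lorentz boost to both
# velocities and re-test. Units c = 1; the numbers are illustrative.
import math

def gamma(v):
    return 1.0 / math.sqrt(1.0 - v * v)

def boost(v, w):
    """Relativistic velocity addition: v as seen from a frame moving at -w."""
    return (v + w) / (1.0 + v * w)

def momentum(m, v):
    return gamma(v) * m * v

m1, v1 = 1.0, 0.30
v2 = 0.50
# choose m2 so the momentum equation holds exactly in the initial frame
m2 = momentum(m1, v1) / (gamma(v2) * v2)

residual_rest = momentum(m1, v1) - momentum(m2, v2)      # ~0 by construction
w = 0.40                                                  # boost speed
residual_boosted = momentum(m1, boost(v1, w)) - momentum(m2, boost(v2, w))
# residual_boosted != 0: the one-equation balance is not Lorentz covariant
```

The boosted residual comes out near 0.18 rather than zero, which is exactly the objection: the lone momentum equation with m1 ≠ m2 holds only in one privileged frame.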

« Last Edit: 02/10/2016 03:34 PM by frobnicat »

Offline Rodal

Dr Rodal, thanks for answering my comments about your work around "negative mass" creation and conservation of momentum. It was clear from the equations that you considered only conservation of momentum and didn't consider conservation of total energy, but from early posts (on the main EM drive thread) I had the impression that you were assuming some compatibility with conservation of energy, for instance (bold added by me for emphasis) :

.../...
Variable mass, implying the need for negative mass to self-accelerate, addresses both conservation of momentum and it also addresses conservation of energy.

Energy is conserved, and such a propulsion device is not a free-energy machine, because the greater the speed, the lower the mass.  More on that later...

(The practical problem of course is that up to now, nobody has found experimental evidence of negative mass  ;) )

1) The above statement that negative mass addresses the conservation-of-energy problem is correct in the specific instance of constant zero initial lumped mass.  Specifically, the initial condition of lumped rest mass = 0, staying constant (for example, as a result of negative mass of equal magnitude to the positive mass), answers all your previous posts regarding conservation of energy, and it does so trivially: the kinetic energy is zero for zero mass, and you cannot use the EM Drive as a generator if it has effectively zero inertial mass. (Admittedly, even photons, with zero rest mass, have energy, so if you want you can throw an expression with the Planck constant in there and replace zero kinetic energy with a very, very small kinetic energy.)

If my memory is correct, in your consideration of energy conservation you never considered that the rest mass could be zero.  You assumed (unstated) that the rest mass was greater than zero.  The concept you addressed in your energy-conservation statements, the EM Drive, has been discussed by Dr. White and by Dr. Minotti as involving negative energy-mass, so one should discuss the consequences of such negative energy-mass in considerations of energy conservation, instead of ignoring it and assuming, as in your considerations, that the energy-mass was positive and constant.  In other words, your energy considerations for the EM Drive ignore the premise of these authors.



2) The argument that <<nobody has found experimental evidence of negative mass>> is a non-starter in this discussion, because eminent physicists like Kip Thorne, Hawking and others have discussed negative mass (to stabilize wormholes, for example), so there is no "shame" in theoretically considering negative mass.  As to experimental evidence, whether the Casimir effect and other types of negative energy can indeed be considered experimental evidence of negative energy is up for discussion, but again, eminent physicists (for example, in discussions of wormhole stabilization) posit the Casimir energy as the means for the negative energy.

And again, in your considerations of conservation of energy you are dealing with a concept where some authors (Dr. White and Dr. Minotti) explicitly state that they are considering negative energy !

Therefore, since your conservation-of-energy considerations ignore negative energy-mass, they seem to be inapplicable to the concepts advanced by Dr. White and Dr. Minotti.
« Last Edit: 02/10/2016 06:16 PM by Rodal »

Offline Rodal

...
I see two different problems. Possibility of negative mass, that is Weak Energy Condition breaking, is one thing. But modification of rest (invariant) mass in a closed system without a balanced counterpart in changes in kinetic energy is another one, it looks more like conservation of (total) energy breaking, whether the change appears as + or - mass.

As an example of the difference between those 2 hypothesis : we can apply usual SR for conservation of energy and momentum when considering a single particle of mass m1>0 splitting into two particles of mass m2>m1 and m3<0. The hypothesis of the existence of particle of mass<0 doesn't change the equations. But your approach seems to ignore one equation (conservation of energy) and hence gives the system a degree of freedom absent of this initial example. Negative mass deltas is then not specifically implicated, and indeed your solution space also shows positive (unbalanced) delta mass, i.e. actually either positive or negative total energy evolutions.

Your latest answer clarifies this as citing a scalar field or supplementary spatial dimension as required to make sense of such mass variation... I would have a hard time following in detail Minotti (or you following Minotti) on such topic outside of usual SR application, but maybe could understand a few words of how, one way or another,  energy is conserved in the end to say that the approach "conserves momentum-energy" ?

...
In the above post: http://forum.nasaspaceflight.com/index.php?topic=39214.msg1489632#msg1489632, I already addressed the fact that I think that variable rest mass is incompatible with Special Relativity, and so it is perplexing why you are bringing Special Relativity (assuming this is what you mean by "SR") into the picture again, as if demanding that Special Relativity should be obeyed.

I thought this was clear:

Is the concept of a time-dependent, variable "rest-mass" (mass that would be measured at rest), allowed by the following theories?



1) Special Relativity: NO. The rest mass cannot be a function of the time coordinate, as Lorentz covariance would not be preserved.  Variation of rest mass cannot be due to kinematic (velocity or position) evolution.

...

As to conservation of energy, I thought that this was also clear:

Is the concept of a time-dependent, variable "rest-mass" (mass that would be measured at rest), allowed by the following theories?

...
2) Einstein's General Relativity (GR): It appears as NO.  It appears that a time-dependent variable mass would give rise to a time-dependent Energy Stress tensor, solely due to the mass variability with time, which is not consistent with Einstein's GR theory of gravitation.   Also it appears that Relativity's Energy-Momentum equation:

m² c⁴ = E² − p² c²

may prevent general time-dependent variable mass.

Note: I need to review how Woodward accommodates variable negative mass-energy in his theory (which I understand he states is consistent with Einstein's General Relativity), as it appears to me that there should be an issue with the time-dependent Energy-Stress tensor and with the Energy-Momentum equation in such a theory.

...
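The invariance behind that equation is easy to exhibit numerically (my own sketch, units c = 1): a boost changes E and p together, but E² − p² stays equal to m², so the rest mass of a free particle cannot vary through kinematic evolution alone:

```python
# SR energy-momentum relation with c = 1: m^2 = E^2 - p^2.
# E and p change from frame to frame, but the combination E^2 - p^2
# is Lorentz invariant, pinning the rest mass m at every velocity.
import math

def gamma(v):
    return 1.0 / math.sqrt(1.0 - v * v)

def invariant_mass(E, p):
    return math.sqrt(E * E - p * p)

m = 2.0
masses = []
for v in (0.0, 0.3, 0.6, 0.9):
    E, p = gamma(v) * m, gamma(v) * m * v   # energy and momentum in this frame
    masses.append(invariant_mass(E, p))     # recovers m = 2.0 each time
```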

Also, it is perplexing why you keep bringing up conservation of energy for a variable-mass problem in a closed system, when it was stated repeatedly that for any such variable mass in a closed system to make sense, it would involve creation (or destruction) of negative energy-mass.

As to how variable mass can be addressed in scalar-tensor theories of gravitation, this has already been done prior to my posting.  I'll try to find the references...

In your discussion of conservation of energy in the EM Drive, you disregard the fact that authors like Dr. White and Dr. Minotti posit a solution that involves negative energy-mass, and instead you insist on considerations of energy conservation that assume constant positive energy-mass, an assumption in contradiction with the assumptions of the authors of the very concept (the EM Drive) you are addressing.

Rather than insisting on obeying Special Relativity and constant energy-mass when discussing the EM Drive concept, it seems to me that it is better for me (and for you too) to address the fact that the authors (Dr. White and Dr. Minotti) posit negative energy-mass, rather than disregarding the authors' assumptions.  :)
« Last Edit: 02/10/2016 10:44 PM by Rodal »

Offline Rodal

The following references, using a particular scalar-tensor theory (https://en.wikipedia.org/wiki/Brans%E2%80%93Dicke_theory) (not the particular theory used by Minotti), treat the mass not as a constant but as a space-time-dependent scalar:


Observable effects of a scalar gravitational field in a binary pulsar
Eardley, D. M.
Astrophysical Journal, vol. 196, Mar. 1, 1975, pt. 2, p. L59-L62.
http://adsabs.harvard.edu/full/1975ApJ...196L..59E

Gravitational radiation, close binary systems, and the Brans-Dicke theory of gravity
Will, C. M. & Zaglauer, H. W.
Astrophysical Journal, Part 1 (ISSN 0004-637X), vol. 346, Nov. 1, 1989, p. 366-377.
http://adsabs.harvard.edu/full/1989ApJ...346..366W

Relativistic Gravity in the Solar System. I. Effect of an Anisotropic Gravitational Mass on the Earth-Moon Distance
Will, C. M.
Astrophysical Journal, vol. 165, p.409
http://articles.adsabs.harvard.edu//full/1971ApJ...165..409W/0000411.000.html




Also see section 7-7.1 of "the famous ADM paper", where the rest mass m is treated as varying slowly (*) with time:

R. Arnowitt, S. Deser, C. W. Misner, "The dynamics of general relativity," in: L. Witten, Gravitation: An Introduction to Current Research, (Wiley, New York, 1962) pp. 227-265.
http://arxiv.org/abs/gr-qc/0405109

(*) in cosmological time !   :)
« Last Edit: 02/10/2016 07:24 PM by Rodal »

Offline rfmwguy

  • EmDrive Builder (retired)
  • Senior Member
  • *****
  • Posts: 2164
  • Liked: 2674
  • Likes Given: 1124
Doc, the invitation extends to your crew here as well if anyone needs a lot of emdrive file storage space. Forgot to mention that on the other thread. Sorry for the oversight and interruption.

Offline frobnicat

Dr Rodal, thanks for answering my comments about your work around "negative mass" creation and conservation of momentum. It was clear from the equations that you considered only conservation of momentum and didn't consider conservation of total energy, but from early posts (on the main EM drive thread) I had the impression that you were assuming some compatibility with conservation of energy, for instance (bold added by me for emphasis) :

.../...
Variable mass, implying the need for negative mass to self-accelerate, addresses both conservation of momentum and it also addresses conservation of energy.

Energy is conserved, and such a propulsion device is not a free-energy machine, because the greater the speed, the lower the mass.  More on that later...

(The practical problem of course is that up to now, nobody has found experimental evidence of negative mass  ;) )

1) The above statement that negative mass addresses the conservation of energy problem is correct in the specific instance of constant zero initial lumped mass. 

Yes, on that we agree. Mathematically, a 0 mass lump always has 0 kinetic energy and needs 0 force to accelerate or change velocity. But there is no EM drive related experiment claiming or hinting at a 0 initial mass of the device. And more importantly, all EM drive related experiments claiming positive anomalies record an effective force ≠ 0 on some massive object (mass > 0). It is this force that raises the problem of conservation of energy (relative to what is measured as explicit input power) for something that is not supposed to react on the walls of the lab or on other objects nearby, like the Earth (through the geomagnetic field, for instance), if it is to be claimed as "anomalous".

Quote
Specifically, the initial condition of lumped rest mass=0, staying constant  (for example as a result of equal magnitude negative mass as positive mass) answers all your previous posts regarding conservation of energy, and it does so trivially, as the kinetic energy is zero for zero mass, and you cannot use the EM Drive as a generator if it has effective zero inertial mass. (Effectively I know that even photons, with zero rest mass have energy, so if you want you can throw an expression with the Plank constant there, and replace zero kinetic energy with very very small kinetic energy).

"The initial condition of lumped rest mass=0" is irrelevant for the devices tested so far. But, for the sake of clarifying this hypothetical limit case: even if we had a device with rest mass=0, if at the cost of 1 kW injected power it is to give more than 3.33 µN of effective force against some massive object while not reacting on the lab's walls (that is the claim of the experiments), then such a 0 rest mass device answers none of my previous posts regarding conservation of energy in conditions of stationary velocity and stationary thrust, as kinetic energy plays absolutely no role in that argument, and I could still use an EM Drive as a generator even if it has effectively zero inertial mass.

Quote
If my memory is correct, in your consideration of energy conservation you never considered that the rest mass could be zero. 

No, because when considering energy conservation in the context of stationary velocity and stationary thrust this is irrelevant, as no change in kinetic energy is involved.

Quote
You assumed (unstated) that the rest mass was greater than zero. 

I had no incentive to assume that, since it wouldn't change the outcome. The device could be of positive, zero, or negative mass; the argument depends solely on a stationary, frame-invariant effect of given thrust/power > 3.33 µN/kW. Obviously the stationary requirement implies constant mass, as anything other than constant mass of the device would trivially imply exhaust. The device is claimed to be propellantless.

Quote
The concept which you addressed in your energy conservation statements, the EM Drive, has been discussed by Dr. White and by Dr. Minotti as involving negative energy-mass, so one should discuss the consequences of such negative energy-mass in considerations of energy conservation, instead of ignoring it, and assuming as in your considerations, that the energy-mass was positive, and constant.  In other words, your energy considerations for the EM Drive,  ignore the premise of these authors.

Well, on those threads about the EM drive I was the first to mention the possibility of a tachyon exhaust to give some grounding (within known frameworks) to self-powered propulsion better than 3.33 µN/kW. I'm not afraid of imaginary rest mass, and I'm ready to hear about negative energy or negative mass. But this is different from plain breaking of conservation of total energy.  3 + (-2) = 1, fine; but 3 = 1, where are we heading?

Quote
2) The argument that <<nobody has found experimental evidence of negative mass >> is a non-starter in this discussion ...

Just to be clear: while I tend to agree, this is not something I wrote or put forward in my remarks, because I am not objecting that negative mass is a somewhat legitimate hypothesis worthy of discussion.

Quote
... because: eminent physicists like Kip Thorne, Hawking and others have discussed negative mass (to stabilize wormholes for example), so there is no "shame" in theoretically considering negative mass.  As to experimental evidence, whether Casimir effect and other types of negative energy can indeed be considered experimental evidence of negative energy is up for discussion, but again eminent physicists (for example in discussion of stabilization of wormholes posit the Casimir energy as the means for the negative energy).

Out of my domain of substantiated opinion. If serious names think the concept of negative mass or negative energy deserves a place in theoretical physics, fine; I don't object (how could I?).

Quote
And again, in your considerations of conservation of energy you are dealing with a concept where some authors (Dr. White and Dr. Minotti) explicitly state that they are considering negative energy !

Negative energy is different from creation or annihilation of energy. I've become extremely skeptical of Dr. White's views on advanced topics since I saw how he treated (or let a collaborator treat, since it's "only" cosigned by White) a mundane classical action-reaction system in an annex dealing with the crucial aspect of conservation of energy of a propellantless device.

My "problem" with White's take on that, and I think with what you are embarking on with γ1m1v1=γ2m2v2 (m1≠m2), is not that there can be an m<0 or Δm<0 or E<0 or ΔE<0 term somewhere; it is that we can't properly account for some definition of total energy Etot (not necessarily restricted to basic special relativity) such that we can guarantee Etot(t+Δt) = Etot(t) for all observers.

Quote
Therefore, since your conservation of energy considerations ignore negative energy-mass, your considerations of energy conservation seem to be inapplicable to the concepts advanced by Dr. White and Dr. Minotti.

My considerations of energy conservation, since I made them under the specific situation of stationary thrust and stationary velocity, don't care about the sign of the energy or mass terms. They care about a lack of equality: the fact that, so far, we are not aware of a frame-agnostic definition of the total energy of the whole system, in a deep-space context (not reacting on the lab's walls), such that we don't see Etot(t+Δt) ≠ Etot(t). While it's true I spoke mainly about an apparent excess of energy, the same idea (stationary thrust at stationary velocity) can be applied to produce an apparent complete wipeout of some energy content, which is just as problematic.
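For reference, the 3.33 µN/kW figure used throughout this exchange is the photon-rocket bound F = P/c, and the stationary-thrust energy argument can be made concrete (my own sketch; the 1 mN/kW thrust is an illustrative stand-in for a claimed anomaly, not a measured value):

```python
# Why thrust/power above 1/c is energetically suspect for a closed,
# propellantless device: at constant thrust F from constant input power P,
# the rate of kinetic-energy gain F*v exceeds P once v > P/F. Illustrative
# numbers; 3.33 uN/kW is just 1/c expressed per kilowatt.
C = 299_792_458.0          # speed of light, m/s

P = 1_000.0                # input power, W (1 kW)
photon_limit = P / C       # ~3.33e-6 N per kW, i.e. 3.33 uN/kW

F = 1e-3                   # an illustrative anomalous thrust of 1 mN per kW
v_breakeven = P / F        # 1e6 m/s: above this speed, F*v > P
# For a photon rocket, F = P/C gives v_breakeven = C, never reached.
```

Above v_breakeven the mechanical output F·v exceeds the electrical input P, which is the over-unity problem raised above; a photon rocket only reaches breakeven at v = c.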

Offline frobnicat

...
I see two different problems. Possibility of negative mass, that is Weak Energy Condition breaking, is one thing. But modification of rest (invariant) mass in a closed system without a balanced counterpart in changes in kinetic energy is another one, it looks more like conservation of (total) energy breaking, whether the change appears as + or - mass.

As an example of the difference between those 2 hypothesis : we can apply usual SR for conservation of energy and momentum when considering a single particle of mass m1>0 splitting into two particles of mass m2>m1 and m3<0. The hypothesis of the existence of particle of mass<0 doesn't change the equations. But your approach seems to ignore one equation (conservation of energy) and hence gives the system a degree of freedom absent of this initial example. Negative mass deltas is then not specifically implicated, and indeed your solution space also shows positive (unbalanced) delta mass, i.e. actually either positive or negative total energy evolutions.

Your latest answer clarifies this as citing a scalar field or supplementary spatial dimension as required to make sense of such mass variation... I would have a hard time following in detail Minotti (or you following Minotti) on such topic outside of usual SR application, but maybe could understand a few words of how, one way or another,  energy is conserved in the end to say that the approach "conserves momentum-energy" ?

...
In the above post: http://forum.nasaspaceflight.com/index.php?topic=39214.msg1489632#msg1489632, I already addressed the fact that I think that variable rest mass is incompatible with Special Relativity, and so it is perplexing why you are bringing Special Relativity (assuming this is what you mean by "SR") into the picture again, as if demanding that Special Relativity should be obeyed.

I thought this was clear:

Is the concept of a time-dependent, variable "rest-mass" (mass that would be measured at rest), allowed by the following theories?



1) Special Relativity: NO. The rest mass cannot be a function of the time coordinate, as Lorentz covariance would not be preserved.  Variation of rest mass cannot be due to kinematic (velocity or position) evolution.

...

Yes, and that was clarifying, thanks. The thing is, you start a whole analysis from an expression of momentum conservation borrowed from SR (Special Relativity) and then apply it in a context where it is no longer frame invariant (in your own terms, "Lorentz covariance would not be preserved"): what value is left in such a relativistic equation when it is no longer frame invariant? In the (respectable) thought experiments you are developing here, I don't see what this equation brings, when saying that total energy can vary (which is outside SR), beyond simply stating that momentum can vary directly (which is neither more nor less outside SR). You end up with an analysis that doesn't enforce any form of conservation of total energy (it is not conserved) and that conserves momentum only in one privileged frame. That's not much left!

SR doesn't prevent Newtonian mechanics from being obeyed for systems with relative velocities << c. GR doesn't prevent SR from being obeyed for systems in ~flat space-time. So if situations of variation of rest mass exist that cannot be due to SR kinematic evolution but are due to such exotica as an inflaton scalar field or a 5th dimension, I would still like (expect? require?) such situations to be compatible with the central SR aspect of frame invariance, or else that well-motivated non-SR equations be used from scratch (and that such non-SR equations decay to the SR ones as a limit case).

Quote
As to conservation of energy, I thought that this was also clear:

Is the concept of a time-dependent, variable "rest-mass" (mass that would be measured at rest), allowed by the following theories?

...
2) Einstein's General Relativity (GR): It appears as NO.  It appears that a time-dependent variable mass would give rise to a time-dependent Energy Stress tensor, solely due to the mass variability with time, which is not consistent with Einstein's GR theory of gravitation.   Also it appears that Relativity's Energy-Momentum equation:

m² c⁴ = E² − p² c²

may prevent general time-dependent variable mass.

Note: I need to review how does Woodward accommodate variable negative mass-energy in his theory (which I understand he states is consistent with Einstein's General Relativity), as it appears to me that there should be an issue with the time-dependent Energy-Stress tensor and with the Energy-Momentum equation in such a theory.

...

Also it is perplexing why you keep bringing up conservation of energy for a variable mass problem in a closed system, when it was stated repeatedly that any such variable mass in a closed system to make sense it would involve creation (or destruction) of negative energy-mass.   


Because for me, "closed system" and "variation of total energy" is a contradiction in terms. Beyond that, I can't quite follow you on GR ground.

Quote
As to how variable mass can be addressed in scalar-tensor theories of gravitation, this has already been done prior to my posting.  I'll try to find the references...

In your discussion of conservation of energy in the EM Drive: you disregard the fact that authors like Dr. White and Dr. Minotti posit a solution that involves negative energy-mass, and instead you insist in considerations of energy conservation that involve the assumption of constant positive energy-mass, an assumption in contradiction with the assumptions of the authors of the concept (EM Drive) you are addressing in your consideration.

Rather than insisting on obeying Special Relativity and constant energy-mass, when discussing the EM Drive concept, it seems to me that it is better for me (and you too) to address the fact that the authors (Dr. White and Dr. Minotti) posit negative energy-mass, rather than disregarding the author's assumptions.  :)

Well, maybe it's better I leave that altogether. I certainly don't have time to dig seriously all those papers.
Just so that you understand my motives, that are not simple trolling (as I'm sure you know) but genuine perplexity (probably as much as your perplexity at my perplexity) :
Your equation for deltamass as a function of deltaV and Vinitial is exactly the same as the one we would obtain if we were to consider, in SR (with both CoE and CoM), a "lump" of mass Minitial merging at velocity Vinitial with an auxiliary lump of mass Mauxiliary at rest, to yield an aggregated lump of mass Mfinal at velocity Vinitial+deltaV. So to me your "variation of rest mass" for a single lump is indistinguishable from a merge event between two lumps, which makes calling the hypothetical varying-mass single lump a "closed system" illusory.
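For concreteness, a minimal Newtonian sketch of the merge bookkeeping described above (an illustration only; the relativistic version adds γ factors, and whether this reproduces the exact equation in the earlier post cannot be checked here):

```latex
% Momentum conservation for the merge (auxiliary lump initially at rest):
M_{\text{initial}}\,V_{\text{initial}}
  = \left(M_{\text{initial}} + M_{\text{auxiliary}}\right)
    \left(V_{\text{initial}} + \Delta V\right)
% Apparent mass change of the "single lump" picture:
\Delta M = M_{\text{auxiliary}}
  = -\,\frac{M_{\text{initial}}\,\Delta V}{V_{\text{initial}} + \Delta V}
```

A positive ΔV with positive Minitial then requires a negative Mauxiliary, which is where the negative mass-energy bookkeeping enters.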

A bit like if you were to invoke spontaneous mass creation from a fifth dimension to explain a turbojet's thrust, and you end up with the same equation as when deriving it as air-breathing. Albeit the "closed system" equation via spontaneous mass creation from a fifth dimension works in only one privileged frame and breaks all relativistic frameworks otherwise (Galilean for a start), while the natural open, air-breathing approach gives the same prediction in the previously privileged frame, but is also correct in all frames and fully compatible with tried and proven frameworks. This is a lot of trouble just to assert that it could conceivably be a closed system when, given the phenomenology of the considered equations, everything indicates that it is open.

And this has nothing to do with signs of mass or deltas of mass until we are speaking of efficiency, and I agree that negative energies specifically do appear when dealing with self-powered >3.33µN/kW, because of the need to dump the energetic debt (again: open system, not closed).


Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...This is a lot of trouble just for asserting that it could conceivably be a closed system when, given the phenomenology of the considered equations, all indicates that it is open....
My God,  I posted with bold red signs and a moving banner, something titled "Under construction" and now you are stating that I am asserting that the EM Drive is a closed system that works on spontaneous and continuous creation of negative mass-energy  ???  ::)

Considering what the implications would be if the EM Drive were a closed system does not at all mean that one is "asserting" that it is  :)

Such considerations are par for the course for anybody involved in R&D ! 

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
....

My considerations of energy conservation, since I made them under the specific situation of stationary thrust and stationary velocity, don't really care about the sign of the energy or mass terms. They care about a lack of equality: the fact that so far we are not aware of a frame-agnostic definition of total energy for the whole system, in a deep-space context (not reacting on the lab's walls), under which we would not see Etot(t+Δt)≠Etot(t). While it's true I spoke mainly about an apparent excess of energy, the same idea (stationary thrust at stationary velocity) can be applied to produce an apparent complete wipeout of some energy content, which is just as problematic.
Well, you have chosen to ignore Dr. White's and Dr. Minotti's theory for the EM Drive regarding negative energy-mass; I have chosen to address its implications.  Regarding

Quote from: Frobnicat
so far we are not aware of a frame agnostic definition of total Energy for the whole system, in a deep space context (not reacting on lab's walls), such that we don't see Etot(t+Δt)≠Etot(t)
, see this discussion of a more familiar concept, dark energy, and how this is addressed: 

http://www.preposterousuniverse.com/blog/2010/02/22/energy-is-not-conserved/

I have not seen these issues addressed in Prof. Woodward's discussions of negative energy-mass; please let me know if you are aware of any such discussion.

Offline frobnicat

  • Full Member
  • ****
  • Posts: 518
  • Liked: 500
  • Likes Given: 151
...now you are stating that I am asserting that the EM Drive is a closed system that works on spontaneous and continuous creation of negative mass-energy  ???  ::) ...

I'm not stating that you are asserting that the EM Drive is a closed system; I say that this assertion (which your thought experiment and derivation imply, as a hypothesis) would be a lot of trouble... You were clear enough about the speculative nature of all that. Sorry if I'm cluttering this thread; I hope this can be of use as possibly representative of the kind of indignation you'd have to face if you were to present this to a wider audience, and help toward a more explicit exposition of the premises. I'll be back after construction, to haunt the basement  ;D

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...
I'm not stating that you are asserting that the EM Drive is a closed system, I say that this assertion (that your thought experiment and derivation imply, as an hypothesis) would be a lot of trouble... You were clear enough about the speculative nature of all that. Sorry if I'm cluttering this thread, hope this can be of use as possibly representative of the kind of indignation you'll have to face if you were to present that to a wider audience, and help toward a more explicit exposition of the premises. I'll be back after construction, to haunt the basement  ;D

I'm glad we understand each other: this is a speculative analysis to examine the implications of the EM Drive being a closed system.  Who knows, sometimes useful things come out of such speculations :)


Thanks to you and everybody dedicating their valuable time to post comments: they have been, are, and always will be most welcome, and appreciated ;)
« Last Edit: 02/11/2016 01:18 AM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
I have tried to handle FEKO Lite but I ran into several problems due to the strong limitations of the trial version.
So far the only working model is a TE011 version excited from the large base. For all later modifications the program stops with a limitation notification, or has solving problems because of a coarse port mesh and similar issues.   :-\
Nice try, but less useful in this configuration. For sure a full version of the program would be very nice.  ::)
« Last Edit: 02/19/2016 06:49 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
OK, just spending a little more time on FEKO Lite.
I removed the waveguide excitation (used before) and instead currently use 2 dipoles near the big base. The mesh is still, aaah yes, I don't like to talk about it... >:( :-\ :-\ The version is restricted to 500 volume elements; these are 450.
Nevertheless I could identify the same TE011 mode again near 2.4GHz. (pics 1&2)
After that I used my spreadsheet and tried the following dimensions:
SD=150
BD=300
L=270
TE013 should be nearby ~2.4GHz (using the bad mesh it may be close enough).
I am a little perplexed by the field pattern delivered by FEKO Lite ??? for this last run. It looks more like TE012 (the two last pics below) :( (Look at my avatar pic; that looks like TE013(!), created via the EMPro full version.)
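As a rough cross-check of those spreadsheet numbers (the spreadsheet itself isn't posted), one can approximate the frustum by an equivalent cylinder at the mean diameter. This is a coarse assumption, not the thread's frustum formulas, but it lands in the right neighborhood:

```python
from math import pi, sqrt

C = 299_792_458.0   # speed of light, m/s
X_TE01 = 3.8317     # first zero of J1 (= first zero of J0'), used for TE01p modes

def te01p_freq(diameter_m, length_m, p):
    """TE01p resonant frequency of a closed cylindrical cavity."""
    a = diameter_m / 2.0
    return (C / (2 * pi)) * sqrt((X_TE01 / a) ** 2 + (p * pi / length_m) ** 2)

# Frustum SD=150 mm, BD=300 mm, L=270 mm -> equivalent mean diameter 225 mm
f = te01p_freq(0.225, 0.270, p=3)
print(f / 1e9)  # roughly 2.3 GHz, i.e. near the ~2.4 GHz target
```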
 
https://forum.nasaspaceflight.com/index.php?topic=37642.msg1412912#msg1412912
« Last Edit: 02/18/2016 08:53 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Spreadsheet results for these dimensions using different calculation methods:
« Last Edit: 02/18/2016 08:41 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
I got it  :)
« Last Edit: 02/19/2016 08:39 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Next I tried to place dielectric material at the small base.
The excitation is still realized using two dipoles near the large base.
These dipoles are not ports from which S-parameters can be extracted by FEKO Lite; only the power level can be displayed, and due to the trial version only 10 data points over the chosen frequency range.
The visible mode is still TE013 in both cases shown in the pics below.

EDIT:
Poynting vector fields can also be displayed. (I don't know if this is an average over a full cycle, since I am using the dipoles.)* :)

*I am familiar with the possibilities of EMPro. There an endless animation can be shown over a full cycle at any fixed frequency, at least for the evolution of the E and H fields, but I never tried to show the Poynting fields in EMPro. I will look into it if I have some time.
« Last Edit: 02/21/2016 08:31 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Instead of the permittivity of 3.2, I tried to redefine the inlay using the same values as before, but now for the permeability of the material. The value seems unnatural, but this is only for comparison with the previous run.
The resulting field pattern is quite different from the last calculation, especially in the amplitudes of the single lobes.
« Last Edit: 02/26/2016 07:10 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
I am thinking about the "better as large as possible" condition for getting a very high Q value. Using the linear Q approximation, it follows that the Q factor will also be larger the higher the mode indices are. Using well-known dimensions for the Brady frustum, of course at higher frequencies, I get a Q value at ~24.5GHz of ~730000 for TE45(34) [the mode number is only used as an example]. The total number may be wrong** but the conclusion still holds. Based on this I now understand the usage of whispering gallery modes in the Cannae thruster much better: it simply leads to higher Q values. It's not only the volume of the cavity; in general, frequency, volume and mode number are all of interest*.
It's simply another expression of the linearity of the Maxwell equations while scaling some factors.


*   Besides other effects like the εr & tanδ of the volume material, the cavity wall conductivity (σ) and so on.

** As discussed in this thread, the simple approximation formula for the total Q number does not hold for modes higher than the fundamental. https://forum.nasaspaceflight.com/index.php?topic=39214.msg1476709#msg1476709

« Last Edit: 03/25/2016 09:23 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
I think about the "better as large as possible" condition to get very high Q value. Using the linear Q approximation it follows that the Q factor will be larger also the higher the mode indices be. ...

Very interesting. 

How did you calculate the Q for  TE45?  Is m=4, n=5 ? What is p?

Did you use FEKO?

I would like to have an analytical result showing this, in order to check and understand it.  Essentially one needs a mode shape that maximizes the following quantity:

∫ElectromagneticEnergy dV/ ∫ ElectromagneticEnergy dA


Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...

Very interesting. 

How did you calculate the Q for  TE45?  Is m=4, n=5 ? What is p?

Did you use FEKO?

I would like to have an analytical result showing this to check it and understand it.  Essentially one needs a mode shape that maximizes the following quantity:

∫ElectromagneticEnergy dV/ ∫ ElectromagneticEnergy dA
I used my spreadsheet and the formula discussed on page 2 of this thread: https://forum.nasaspaceflight.com/index.php?topic=39214.msg1476709#msg1476709
I simply picked the Bessel value for TE45 and chose the p-number to match 24.5GHz; for the given geometry it's p=34
(34 half-wavelengths in the axial direction).

I also tried this for TE01p with p=37, fres≈24.3GHz; the resulting Q is 339838.

The cavity is simply larger in relation to the wavelength. So instead of using a low-order mode at low frequency, which leads to a large cavity, one can also store an equivalently high energy using a higher frequency and mode number while keeping the cavity volume constant. This explains why it's more efficient to use TE013 instead of TE011, for example. The energy density depends on the frequency/wavelength.
I fully agree with the volume-to-surface relation!

I cannot confirm this with FEKO because I only have the trial version.
Much more than the possible 500 volume elements would be necessary for a 10-times-higher frequency and this huge mode order.

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
If I have the time I could check this using EMPro within the next weeks. Using the eigenresonance solver, the resonant frequency and the Q will be displayed directly for each mode.

For now I only have the results of the spreadsheet with the approximation formulas.
« Last Edit: 03/26/2016 02:52 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
If I have the time I could check this using EMPro within the next weeks. Using the eigenresonance solver, the resonant frequency and the Q will be displayed directly for each mode.

For now I only have the results of the spreadsheet with the approximation formulas.
It would be great if you could check this with EMPro.  It seems to me that to maximize Q one wants to maximize

∫ElectromagneticEnergy dV / ∫ElectromagneticEnergy dA

This means minimizing  ∫ElectromagneticEnergy dA

while maximizing

∫ElectromagneticEnergy dV

which means a mode shape that has most of the high electromagnetic energy in the interior volume of the cavity instead of near the metal walls.
« Last Edit: 03/26/2016 03:03 PM by Rodal »
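The volume-to-surface argument can be put in code via the common geometry-only estimate Q ≈ 2V/(S·δ), with δ the skin depth. A sketch, assuming Brady-type frustum dimensions and copper conductivity (both assumed here for illustration); the real Q also depends on how the mode distributes its energy, which is exactly the point above:

```python
from math import pi, sqrt

MU0 = 4 * pi * 1e-7  # vacuum permeability, H/m

def skin_depth(freq_hz, sigma):
    """Classical skin depth of a good conductor."""
    return sqrt(2.0 / (2 * pi * freq_hz * MU0 * sigma))

def frustum_vol_surf(R, r, h):
    """Volume and interior surface (two end caps + lateral wall) of a truncated cone."""
    vol = pi * h / 3.0 * (R * R + R * r + r * r)
    surf = pi * (R + r) * sqrt(h * h + (R - r) ** 2) + pi * R * R + pi * r * r
    return vol, surf

# Assumed dimensions (m): big/small end radii, axial length; copper sigma in S/m
R, r, h = 0.2794 / 2, 0.15875 / 2, 0.2286
vol, surf = frustum_vol_surf(R, r, h)
q_geom = 2 * vol / (surf * skin_depth(2.45e9, 5.8e7))
print(round(q_geom))  # geometry-only estimate, on the order of 5e4
```

For a fixed cavity this estimate grows like √f (since δ ∝ 1/√f), consistent with the higher-mode/higher-frequency trend discussed above.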

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Dr. Rodal, is it possible that the displacement of a large number of electrons inside the copper walls could cause a temporary displacement of, or force on, the cavity? Since electrons are driven by thermal currents, they travel away from the heat source in the direction of the heatsink (the cooler end of the cavity) due to their statistical velocity**. The center of mass may also change due to this effect. In the case of the (COMSOL) pic below the electrons would be driven toward the small end, while the displacement of the cavity itself would be forward, toward the big end. Of course this effect is reversible and holds only until thermal equilibrium is re-established.

https://en.wikipedia.org/wiki/Thermophoresis
http://aerosols.wustl.edu/Education/Thermophoresis/section02.html


**Some years ago I measured this kind of thermally driven electron current using a brass rod and a very sensitive fluxgate sensor system. One end of the rod was heated by a flame for a few seconds, the other not. It was easy to measure the magnetic field of this current, and its direction/sign depended on which end of the rod was heated (while the positions of the sensor and the rod were static the whole time). When no temperature gradient was present, no field could be detected.
« Last Edit: 12/18/2016 05:09 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Dr. Rodal is it possible that the displacement of a large number of electrons inside the copper walls could cause a temporal displacement of the cavity/ force on the cavity? ...

Interesting.  For resonant cavities used in particle accelerators there is also the phenomenon of multipaction, which has long been known to be a problem for the operation of resonant cavities.  Multipaction is an electron resonance effect that occurs when radio-frequency fields accelerate electrons in a vacuum and cause them to impact a surface which, depending on the electron energy, releases one or more secondary electrons, so the effect cascades:

https://en.wikipedia.org/wiki/Multipactor_effect

It is not clear to me how these effects would result in self-acceleration of the cavity, since they are all internal effects and no mass or energy is expelled from the cavity.  So the mass and energy of the cavity don't change, do they?

 Is there a way to explain self-acceleration ?
« Last Edit: 04/01/2016 01:32 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...

It is not clear to me how these effects would result in self-acceleration of the cavity, since they are all internal effects and no mass or energy is expelled out of the cavity.  So, the mass and energy of the cavity doesn't change, does it?

 Is there a way to explain self-acceleration ?
No question, the effect described in my last post is not useful for producing thrust in this sense, but it could lead to a false interpretation of some displacement of a torsion pendulum, for example. At the moment I have no idea about the number of thermally driven electrons involved and whether the effect would be big enough or just lower order. One could measure how much electrical DC current is needed to generate the same H-field power as in a well-defined thermally driven case, to get an idea of the numbers?
Again, I don't think there is really thrust based on such effects without something being able to escape from the cavity through the walls. I was simply talking about a temporary displacement of electrons from one end of the cavity to the other and the related back-reaction, nothing else.
BTW this could explain why some experiments (with modes like TE013) generate a force in the opposite direction; for this mode the heating is more effective at the small end due to the larger field strength and wall currents.
Pure speculation. ;)


The effect you describe is also interesting. I would be surprised if the impacted electrons could escape through the relatively thick metal walls.
« Last Edit: 04/01/2016 06:41 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Q measurement of a resonator with matched antenna impedance

In the original EMDrive forum at NSF there were discussions about how to measure the Q in general.

This sample measurement shows the difference between a correct Q measurement and a bad one. The DUT is a conical cavity resonator made of steel with a conductivity of ~1.4e6 S/m; the mode is TE011 @ 24.192318GHz; the antenna is optimized to excite this mode shape, and the antenna feed contains adjusting elements for impedance matching. The resonator contained a thin dielectric plate at the small base. The following example shows an S11 measurement.
Rather than measuring 3dB above the peak of the resonance curve, which leads to an absurdly high Q value (pic 3), one has to measure 3dB down from the maximal reflected energy level*. Pic 1 shows this; pic 2 confirms the measurement conditions in the complex plane. This procedure is consistent with what is shown in many textbooks.
(The NASA report also shows how to measure the correct value!)
Marker 1 shows the center of the peak.
Markers 5 & 6 mark the -3dB borders, measured against the reference Marker 4 at -0.9dB.
Q = f/Δf
Q = fcenter/(fhigh − flow)

The result is a Q of ~1055, which is a natural value for a steel cavity resonator in this frequency regime with a dielectric insert.
(In addition the result is backed by EMPro calculations.)
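The bandwidth relation in code form (the actual marker frequencies are not listed in the post, so the ±11.45 MHz below is an illustrative bandwidth chosen to reproduce the quoted Q of ~1055):

```python
def q_from_bandwidth(f_center, f_low, f_high):
    """Loaded Q from the -3 dB (half-power) points: Q = f_center / (f_high - f_low)."""
    return f_center / (f_high - f_low)

f0 = 24.192318e9                                  # TE011 resonance from the measurement, Hz
q = q_from_bandwidth(f0, f0 - 11.45e6, f0 + 11.45e6)
print(round(q))  # ~1056, matching the quoted ~1055
```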

In contrast to the above result, using measurements like in the last pic (3dB above the peak), the calculation leads to a Q of ~22000, which is quite unrealistic for this frequency, mode shape and cavity material.

*Please note that this simple method is usable under impedance-matched conditions; for strong over- or under-coupling the situation is more complicated and the calculation is more complex. There the simple -3dB approximation is no longer useful.
For more details see here:
http://www-elsa.physik.uni-bonn.de/Lehrveranstaltungen/FP-E106/E106-Erlaeuterungen.pdf

EDIT
https://forum.nasaspaceflight.com/index.php?topic=41732.msg1635542#msg1635542
« Last Edit: 01/26/2017 08:15 PM by X_RaY »

Offline TheTraveller

Q measurement of a resonator with matched antenna impedance
...
The result is a loaded Q of ~1055 which is a natural value for a steel cavity resonator in this frequency regime.

It is simple to directly measure the unloaded Q of a resonant cavity by using TC = Qu / (2 Pi Fres).

With an empty cavity, apply a resonant RF signal and measure the time it takes the forward power to increase from the initial value of 0 to 63.2% of the final value. That is 1 TC. Then Qu = TC × 2 Pi Fres.
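That time-constant relation in code form (the 2.45 GHz frequency and 3.25 µs rise time below are made-up illustrative values, not measurements):

```python
from math import pi

def q_unloaded_from_tc(f_res_hz, tc_s):
    """Unloaded Q from the cavity fill time constant, per TC = Qu / (2*pi*f_res)."""
    return 2 * pi * f_res_hz * tc_s

# Example: time for stored energy to reach 63.2% of its final value after switch-on
qu = q_unloaded_from_tc(2.45e9, 3.25e-6)
print(round(qu))  # ~50000
```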
« Last Edit: 04/01/2016 09:54 PM by TheTraveller »
"As for me, I am tormented with an everlasting itch for things remote. I love to sail forbidden seas.”
Herman Melville, Moby Dick

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...
Is simple to directly measure the unloaded Q of a resonant cavity by using TC = Qu / (2 Pi Fres)

With an empty cavity, apply a resonant Rf signal and measure the time it takes forward power to increase from the initial value of 0 to 63.2% of the final value. That is 1 TC time. Then Qu = 1 TC time X 2 Pi Fres.
This is your interpretation; for me, at these single frequencies (-3dB below the max reflection) it's simply the resulting reflection coefficient (of an oscillating RLC circuit) for a cavity near the resonant frequency. It also describes a defined amount of stored RF energy at the given frequencies for this circuit.
It simply defines the bandwidth, per definition, as well as the dimensionless Q value.
I agree it's possible to calculate the time for a given circuit to reach a defined amount of stored energy, as you state, like for a simple capacitor, when all circuit parameters (frequency, R, L & C) are known, as well as the power of the source**.
I don't know how helpful this definition could be for explaining the generation of thrust in the case of a truncated conical cavity. As a first step I would try to understand steady-state/CW conditions before trying to explain pulse-driven EmDrive resonators. You may do it the way you like.


**Or normalized to simply "1".
« Last Edit: 04/03/2016 12:08 AM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
If I have the time I could check this using EMPro within the next weeks. Using the eigenresonance solver, the resonant frequency and the Q will be displayed directly for each mode.

For now I only have the results of the spreadsheet with the approximation formulas.
It would be great if you could check this with EMPro.  It seems to me that, to maximize Q, one wants to maximize

∫ElectromagneticEnergy dV/ ∫ ElectromagneticEnergy dA

this means minimizing  ∫ ElectromagneticEnergy dA

while maximizing

∫ElectromagneticEnergy dV

this means a mode shape that has most of the electromagnetic energy in the interior volume of the cavity rather than near the metal walls
I have not forgotten this calculation, but have too many other things to do at the moment.
I will check it when I have time during future calculations at our local university.
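While waiting for eigenmode runs, the volume-to-surface reasoning above can be turned into a rough order-of-magnitude estimate. A sketch, assuming copper walls and replacing the mode-dependent field integrals by the simple geometric ratio V/S (so this is only indicative, not a substitute for an eigenmode solver; the cavity dimensions are hypothetical):

```python
import math

mu0 = 4e-7 * math.pi                  # vacuum permeability, H/m

def skin_depth(f_hz, sigma_s_per_m, mu_r=1.0):
    """Skin depth in a good conductor at frequency f."""
    return 1.0 / math.sqrt(math.pi * f_hz * mu_r * mu0 * sigma_s_per_m)

def q_rough_estimate(volume_m3, surface_m2, f_hz, sigma):
    """Order-of-magnitude wall-loss Q: Q ~ (2/delta) * V/S.
    The exact Q replaces V/S by the mode-dependent ratio of the
    |H|^2 volume integral to its surface integral."""
    return (2.0 / skin_depth(f_hz, sigma)) * (volume_m3 / surface_m2)

# Hypothetical cylindrical cavity with copper walls:
r, L = 0.11, 0.23                             # radius and length, m
V = math.pi * r**2 * L
S = 2 * math.pi * r**2 + 2 * math.pi * r * L
print(q_rough_estimate(V, S, 2.45e9, 5.8e7))  # tens of thousands for copper
```

The estimate makes the point in the quote quantitative: for fixed wall material (fixed skin depth), Q grows with the ratio of stored-energy volume to lossy wall surface.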

Optimizing dielectric and shape
« Reply #81 on: 04/22/2016 10:31 PM »
Longtime lurker, seldom poster.

I've been quite interested in the fact that the NASA experiments, which show more thrust than any theory yet predicts, also have a cavity only partially filled with dielectric material: not fully, but partially, at the small end only. That leads to the following thoughts, based on the McCulloch theory.

One thing about a dielectric is that it will have an index of refraction n substantially above 1, which implies that the photons within the dielectric move slower than c by some amount. McCulloch's theory implies that photons have mass (or perhaps virtual mass may be a better term) that is modulated by Unruh radiation, and that during resonance, the photons gain mass while moving within the big end, and lose mass while moving within the small end. The presence of a partial dielectric indicates that in the NASA experiment, the photons would necessarily be moving faster in the big end (while more massive) and slower in the small end (while less massive). If in fact photons do have mass, this is exactly what we would expect from the relativistic change in v caused by the dielectric. In other words, the "dielectric effect" would enhance the effect from the shape of the frustum alone. So far, so good for McCulloch.

If this is true, this also suggests that perhaps the optimum dielectric would be a layered one, starting with a layer of very high n at the small endplate, followed by another of slightly lower n, and another of slightly lower n, until the entire cavity is filled (with perhaps the final layer being air or vacuum, next to the big endplate). Thus the photons would be in states of continuously differential v during the entire traverse of the cavity.

One potential issue with that approach is that any photons not perfectly axial (or parallel to the axis) would refract in odd ways at the interfaces, possibly increasing the number of wall collisions and hence degrading Q.

But this could be avoided if the sides of the frustum were not flat, but rather flaring, like the bell of a trumpet. By computing the angle of refraction at every interface, (which depends on the differing values of n) it would be possible to build a frustum with a layered dielectric and a flared bell, so that a photon could travel along the wall of the cavity, not touching, as it passed through successive refractions in the dielectric. If the endplates were also spherical instead of flat, perfect reflection would allow the photons to retrace their paths exactly.
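The refraction bookkeeping suggested above can be sketched with Snell's law alone. A minimal ray-angle trace through a hypothetical stack of planar layers (indices chosen arbitrarily for illustration; real cavity interfaces would be curved, so this is only the flat-interface limit):

```python
import math

def trace_angles(n_layers, theta0_deg):
    """Propagate a ray's angle through planar interfaces between layers of
    refractive index n_layers[i], via Snell's law:
    n_i * sin(theta_i) = n_{i+1} * sin(theta_{i+1}).
    Returns the angle (deg) inside each layer, or None past the point of
    total internal reflection."""
    angles = [theta0_deg]
    for n1, n2 in zip(n_layers, n_layers[1:]):
        s = n1 * math.sin(math.radians(angles[-1])) / n2
        if abs(s) > 1.0:              # total internal reflection at this interface
            angles.append(None)
            break
        angles.append(math.degrees(math.asin(s)))
    return angles

# Hypothetical stack: high n at the small end, grading down to air:
print(trace_angles([2.1, 1.8, 1.5, 1.2, 1.0], 10.0))
```

Going from high n to low n, the ray bends progressively away from the normal (n·sin θ is conserved), which is the geometric effect the flared "trumpet bell" wall would have to compensate.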

Just a thought.
The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka” but “That's funny...”
—Isaac Asimov (1920–1992)

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
The relationship between radiation PRESSURE, ENERGY DENSITY and LAGRANGIAN DENSITY

In several EM Drive anomalous force discussions there is confusion as to the relationship of the radiation pressure to the electromagnetic fields, and a lack of appreciation of the dependence of the radiation pressure on the energy density.  We will address this:

1) ENERGY DENSITY

The electromagnetic energy density is defined as:

u = ½ (E·D + B·H)

where

E is the electric field;
D is the electric displacement field;
B is the magnetic flux density;
H is the magnetic field

For linear, nondispersive and isotropic (for simplicity) materials, the constitutive relations can be written as

D = εE,   B = μH

where

ε is the electric permittivity of the material;
μ is the magnetic permeability of the material

therefore

u = (ε/2)E·E +  (1/(2μ))B·B

we will write, as short nomenclature for the dot product of a vector with itself, where the E and B fields are out of phase by 90 degrees and have monochromatic (single-frequency) dependence on angular frequency ω:

E² = E·E
      = (Er² + Eθ² + Ez²) (Sin(ωt))²  (cylindrical coordinates)
      = (Eθ² + Eφ² + Er²) (Sin(ωt))²  (spherical coordinates)

B² = B·B
      = (Br² + Bθ² + Bz²) (Cos(ωt))²  (cylindrical coordinates)
      = (Bθ² + Bφ² + Br²) (Cos(ωt))²  (spherical coordinates)

where
(Sin(ωt))² = ½ (1 - Cos(2ωt))
(Cos(ωt))² = ½ (1 + Cos(2ωt))

u = ½ (εE² + (1/μ)B²)

Therefore the energy density varies with time around its cyclic average with a frequency 2ω that is twice the frequency ω of the electromagnetic fields E and B.
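This doubled frequency is easy to verify numerically. A sketch with hypothetical amplitudes, checking the instantaneous u(t) against its mean-plus-2ω decomposition:

```python
import numpy as np

eps0 = 8.8541878128e-12        # vacuum permittivity, F/m
c = 299792458.0                # speed of light, m/s

omega = 2 * np.pi * 1.0e9      # hypothetical 1 GHz fields
t = np.linspace(0.0, 2e-9, 4001)   # two RF periods
E0, B0 = 1.0e4, 2.0e-5         # hypothetical peak amplitudes

E = E0 * np.sin(omega * t)     # standing-wave fields, 90 degrees apart
B = B0 * np.cos(omega * t)
u = 0.5 * eps0 * (E**2 + (c * B)**2)

# u = mean - amplitude*cos(2*omega*t): reconstruct directly
u_check = (0.25 * eps0 * (E0**2 + (c * B0)**2)
           - 0.25 * eps0 * (E0**2 - (c * B0)**2) * np.cos(2 * omega * t))
print(np.allclose(u, u_check))   # u oscillates at 2*omega about its cyclic average
```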

The "microscopic" version of Maxwell's equations admits only the fundamental fields E and B, without a built-in model of material media. Only the vacuum permittivity and permeability are used, and there is no D or H:

u = (εo/2) E·E + (1/(2μo)) B·B

recalling the identity between the electric permittivity, magnetic permeability and the speed of light:

c² = 1/(εo μo)

(all terms in this identity: c, εo and μo are Lorentz invariants: they have the same value for observers in different moving reference frames)

we obtain:

u = (εo/2) (E² + c²B²)



2) UNITS of PRESSURE and of ENERGY DENSITY

Notice that the units of energy density are energy per unit volume, Energy/Volume or (Force×Length)/Volume, while the units of pressure are force per unit area, Force/Area.  Since Volume = Area × Length, pressure and energy density have exactly the same units.  This is not a coincidence, as we will show.



3) MAXWELL STRESS TENSOR

Maxwell's stress tensor components, for arbitrary orientation of the unit cube in an orthogonal coordinate system, are defined as:

σij = εo (EiEj - ½ δij E²) + (1/μo) (BiBj - ½ δij B²)

or, equivalently, in terms of the energy density:

σij = εo EiEj + (1/μo) BiBj - u δij

the pressures are the diagonal Maxwell tensor components, given by

σii = εo (Ei)² + (1/μo) (Bi)² - u

(this is an important equation which we will be using in further derivations below)

σii = - u + εo ( (Ei)² + c² (Bi)² )

PRINCIPAL VALUES of stress

At every point in a stressed body there are at least three planes, called principal planes, with normal vectors called principal directions, where there are zero shear stresses (the stress matrix off-diagonal components are zero). The three stresses normal to these principal planes are called principal stresses (they are the only non-zero components in the matrix: the diagonal components).

Every second rank tensor (such as the stress tensor) has three independent quantities, which are invariant under rotation, associated with it. One set of such rotational invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors.

The eigenvalues are the roots of the characteristic polynomial. The principal stresses are unique for a given stress tensor.  A coordinate system with axes oriented to the principal directions implies that the normal stresses are the principal stresses and the stress tensor is represented by a diagonal matrix:

σ = diag(σ1, σ2, σ3)

The three eigenvalues (principal values) of Maxwell's stress tensor are:

{σ1, σ2, σ3} = {- u,  +√( ((εo E² - (1/μo) B²)/2)² + (εo/μo)(E·B)² ),  -√( ((εo E² - (1/μo) B²)/2)² + (εo/μo)(E·B)² )}

The first term under brackets, and the two terms inside the radical, are invariant under rotation (*).  The first rotationally invariant term is the energy density: the addition of E⋅E and c²B⋅B.  The energy density is invariant under rotations but it is not invariant, in general, under Lorentz transformations (it can change from one frame of reference to another in special relativity, and therefore Sommerfeld states that the energy density has no meaning independent of a specific frame of reference).  The second invariant, the difference between E⋅E and c²B⋅B, is called the "Lagrange density" by Arnold Sommerfeld in his masterwork "Electrodynamics" for good reason, since it can be shown that this expression is the Lagrangian density of the electromagnetic fields in vacuum (see: http://bado-shanai.net/Map%20of%20Physics/mopEMLagrangianDensity.htm ).   It is a scalar that is invariant under a Lorentz transformation (this scalar does not change from one frame of reference to another in special relativity) as well as invariant under rotations. The final rotationally invariant term, which is also a Lorentz invariant, is the dot product of the electric vector field with the magnetic vector field, E⋅B, divided by the square of the impedance of the vacuum:

(E·B)²/Z² = (εo/μo) (E·B)² ,   where Z = √(μo/εo) is the impedance of the vacuum

To bring the stress tensor to diagonal form, one must rotate the reference axes to a reference system in which the vectors E and B (at a given point in space and at a given time) are parallel to each other or where one of them is equal to zero.  Such a transformation is always possible except when both conditions occur: a) E and B are mutually perpendicular (E⋅B=0) and b) when the Lagrange density is zero, such that E⋅E = c²B⋅B.  Both these quantities are Lorentz invariants, so when the electromagnetic fields are mutually perpendicular and equal to each other, they are so in any coordinate system under any Lorentz transformation, and we see that two of the three eigenvalues are zero in that case because the expression under the radical sign is zero.

Comparing the first principal stress with the previous expression for energy density, one readily identifies that the first principal stress is the negative of the energy density (an invariant under rotations but not under Lorentz transformations):

σ1 = - u

Another invariant (a Lorentz invariant and rotation invariant) is the "Lagrangian density": the first term inside the square root of the double-valued terms can be simplified by noticing that

- u + εo E² = (εo E² - (1/μo) B²)/2

therefore:

σ2 = (+u - εo E²) √(1 + ( (√(εo/μo) E·B)/(- u + εo E²) )²)

σ3 = (- u + εo E²) √(1 + ( (√(εo/μo) E·B)/(- u + εo E²) )²)
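These principal-value expressions can be checked numerically against a direct eigenvalue decomposition of the stress tensor. A sketch with arbitrary (hypothetical) field values, writing w for the square-root term:

```python
import numpy as np

eps0 = 8.8541878128e-12
mu0 = 4e-7 * np.pi
c = 1.0 / np.sqrt(eps0 * mu0)

def maxwell_stress(E, B):
    """Maxwell stress tensor sigma_ij = eps0*Ei*Ej + (1/mu0)*Bi*Bj - u*delta_ij."""
    u = 0.5 * (eps0 * (E @ E) + (B @ B) / mu0)
    return eps0 * np.outer(E, E) + np.outer(B, B) / mu0 - u * np.eye(3)

rng = np.random.default_rng(0)
E = rng.normal(size=3) * 1e4          # arbitrary hypothetical field values
B = rng.normal(size=3) * 1e-4

u = 0.5 * (eps0 * (E @ E) + (B @ B) / mu0)      # energy density
L = 0.5 * (eps0 * (E @ E) - (B @ B) / mu0)      # Lagrangian density
w = np.sqrt(L**2 + (eps0 / mu0) * (E @ B)**2)   # the radical term

eig = np.sort(np.linalg.eigvalsh(maxwell_stress(E, B)))
expected = np.sort([-u, +w, -w])
print(np.allclose(eig, expected))     # eigenvalues are {-u, +w, -w}
```

The same run also confirms that the trace (the sum of the three principal stresses) equals -u, as stated further below.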

When the dot product (also called scalar product) of the electric and magnetic vector fields is zero, that is when

E·B = 0

then one gets the simple relations between the principal stresses and the energy density:

σ1 = - u

σ2 = +u - εo E²

σ3 = - u + εo E²


Notice that, in general, only two of the three principal stresses (σ2 and σ3) are Lorentz invariants, while the first principal stress (σ1), is not, in general, a Lorentz invariant.

Although, in general, the value of u is different for observers in different moving reference frames, the value of ±(u - εo E²) is the same for observers in different moving reference frames.


The "volumetric or hydrostatic stress" is the summation of all three principal stresses acting on the unit of volume:

σ1 + σ2 + σ3 = - u

and it is therefore equal to the negative of the electromagnetic energy density.

Also notice that, since the surface charge density on a conductor is εo E, the second term in the last two expressions is the Coulomb pressure "p", the surface charge density times the electric field:

p = εo E²,

(the Coulomb tensile force per unit area), so we can write:

σ1 = - u

σ2 = + u - p

σ3 = - u + p

In words: when the dot product of the electric and magnetic vector fields is zero, the principal stresses are just a function of the energy density and the Coulomb pressure.

The Coulomb pressure p = εo E² is twice the so-called electrostatic pressure ½ εo E², which arises from

σ3 = - u + p
    = -(εo/2) (E² + c²B²) + εo E²
    = (εo/2) (E² - c²B²)

The quantity in brackets, E² - c²B², is a "Lorentz" invariant of Maxwell's stress tensor (see p. 378, Chapter 21 of Classical Electricity and Magnetism by Panofsky and Phillips), which for zero magnetic field B = 0 gives the electrostatic pressure as:

σ3 = (εo/2) E²

For zero magnetic field B = 0, and a unidirectional electric field E (with no transverse components), the principal axes are oriented such that σ3 is parallel to the direction of E while the axes of σ1 and σ2 are perpendicular to E.  The electric field transmits a tension σ3 = (εo/2) E² in the direction of the E field and a transverse pressure of equal absolute magnitude, σ1 = σ2 = -(εo/2) E², in the directions perpendicular to the E field.
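As a quick numerical illustration of the magnitudes involved (the field value is hypothetical, of the order of the breakdown field of air):

```python
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

def electrostatic_tension(E_v_per_m):
    """Tension (eps0/2)*E^2 along the field direction; the transverse
    principal stresses have the same magnitude but are compressive."""
    return 0.5 * eps0 * E_v_per_m**2

# Hypothetical: a field of 3 MV/m (roughly the breakdown field of air)
print(electrostatic_tension(3e6))   # ~39.8 Pa
```

Even at breakdown-scale fields the electrostatic stress is only tens of pascals, which is why radiation-pressure-scale forces are small.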

In the general case for non-zero E and B, one of the principal stresses is the negative of the energy density, while the other two stresses are the difference between the energy density and the Coulomb pressure, the first one with a plus sign and the second one with a minus sign.

Also notice that in general, under all situations, two of the principal stresses are compressive, while the other principal stress is tensile.  The cross-section experiencing a tensile principal stress and an equal compressive principal stress in the perpendicular direction experiences a state of pure shear: whereby a square will deform into a rectangle and a circle will deform into an ellipse:



If the energy density is greater than the Coulomb pressure, then the tensile stress is the second principal stress. 

If the Coulomb pressure is greater than the energy density, then the tensile stress is the third principal stress.

When the Coulomb pressure is negligibly small compared to the magnetic energy density (εo/2 times the square of the magnetic field times the square of the speed of light c), the principal stresses are very simply due to the energy density: two of the principal stresses are compressive, equal to the negative of the energy density, and the third principal stress is tensile, equal to the energy density.

This explains, for example, why Dr. H. White, when discussing his theory of the Quantum Vacuum as an explanation for the EM Drive, emphasizes the importance of the energy density, and the fact that the stress is tensile in one principal direction and compressive in the other two orthogonal directions.



4) TRANSVERSE ELECTRIC (TE) RESONANCE OF A TRUNCATED CONE WITH SPHERICAL ENDS

In order to solve for the stresses in an EM Drive with spherical ends, we’ll need to work in a coordinate system in which the cavity walls have a simple description. I will use the same coordinate system as used by Greg Egan (credit to Egan for this image showing the coordinate system) and several textbooks to solve electromagnetic resonance problems in conical cavities. 

Resonant Modes of a Conical Cavity
by Greg Egan
http://gregegan.customer.netspace.net.au/SCIENCE/Cavity/Cavity.html

The truncated spherical cone is positioned so that its longitudinal axis of axi-symmetry lies on the z-axis of our spherical coordinate system, its would-be apex lies at the origin, and in spherical coordinates (r, θ, φ) — where r is the distance from the origin, θ is the polar angle from the z-axis, and φ is the azimuthal angle from the x–z plane — the side walls of the cone will be defined by θ =+/- θw, its narrow end will be defined by r = r1, and its wide end will be defined by r = r2.



BOUNDARY CONDITIONS AT SPHERICAL ENDS


The boundary conditions for the electromagnetic fields at the small end (defined by r = r1), and at the big end (defined by r = r2) are:

Er = 0  because the longitudinal electric field is zero in a TE mode
Eθ = 0 because electric fields parallel to a conductive surface must be zero
Eφ = 0 because electric fields parallel to a conductive surface must be zero

Therefore the electric vector field at both spherical ends, for a TE mode is identically zero:

E = 0

Br = 0  because magnetic fields perpendicular to a conductive surface must be zero
Bφ = 0 because the azimuthal (transverse) magnetic field is zero in a TE mode

Therefore the only electromagnetic field that is non-zero at both spherical ends, for a TE mode, is the magnetic field in the polar direction Bθ.  This is a general result, that holds for any and all TE mode shapes.

BOUNDARY CONDITIONS AT CONICAL WALLS

The boundary conditions for the electromagnetic fields at the conical side walls (defined by θ = +/-θw) are:

Er = 0  because the longitudinal electric field is zero in a TE mode
Eθ = 0 because the polar electric field is zero for TEmnp mode shapes with m=0
Eφ = 0 because electric fields parallel to a conductive surface must be zero

Therefore the electric vector field at the conical side walls, for a TEmnp mode with m=0, is

E = 0

Bθ = 0  because magnetic fields perpendicular to a conductive surface must be zero
Bφ = 0 because the azimuthal (transverse) magnetic field is zero in a TE mode

Therefore the only electromagnetic field that is non-zero at the conical side walls, for a TEmnp mode with m=0, is the magnetic field in the radial direction Br

STRESS AT SPHERICAL ENDS

Since as previously shown, the boundary conditions at the spherical ends are such that there is only one non-zero electromagnetic field component, then:

*the shear stress components are zero, and hence the stress field is a principal stress at the ends

Also the dot product of the electric and magnetic fields is zero: E·B=0

Also, since the electric vector field is zero at the spherical ends:

E=0

it immediately follows that, for all (TE) Transverse Electric mode shapes:

* Coulomb's pressure and the electrostatic pressure are zero at the spherical ends (p=0)
* the stress at the spherical ends is compressive and entirely due to the energy density
* the stress is entirely due to the magnetic component in the polar direction, parallel to the end plate

σ3 = - u

σ3 = -(εo/2) c²B²

σrr = - (1/(2μo)) (Bθ Cos(ωt))²
    = - (1/(4μo)) Bθ² (1 + Cos(2ωt))

Thus the stress varies from a minimum value of zero to its maximum compressive value, never reversing sign, at a frequency which is twice as high as the frequency of the electromagnetic fields.


STRESS AT CONICAL WALLS

Since as previously shown, the boundary conditions at the conical side walls are such that there is only one non-zero electromagnetic field component, then:

*the shear stress components are zero, and hence the stress field is a principal stress at the conical side walls

Also the dot product of the electric and magnetic fields is zero: E·B=0

Also, since the electric vector field is zero at the conical side walls, for TEmnp mode shapes with m=0:

E=0

it follows that, for TEmnp mode shapes with m=0:

* Coulomb's pressure and the electrostatic pressure are zero at the conical side walls (p=0)
* the stress at the conical walls is compressive and entirely due to the energy density
* the stress is entirely due to the magnetic component in the longitudinal direction, parallel to the conical side walls

σ3 = - u

σ3 = -(εo/2) c²B²

σθθ = - (1/(2μo)) (Br Cos(ωt))²
     = - (1/(4μo)) Br² (1 + Cos(2ωt))

Thus the stress varies from a minimum value of zero to its maximum compressive value, never reversing sign, at a frequency which is twice as high as the frequency of the electromagnetic fields.



STRESS AT ALL INTERNAL SURFACES FOR TE0np MODES

* all shear stress components are zero, and hence the stress field is a principal stress
* the stress is compressive
* the stress varies from zero to its maximum compressive value, never reversing sign, at a frequency which is twice as high as the frequency of the electromagnetic fields.
* the electric vector field is zero at all internal surfaces
* Coulomb's pressure and the electrostatic pressure are zero
* the stress is entirely due to the energy density
* the stress is entirely due to the magnetic component parallel to the surface


The above applies to the mode shapes used in the EM Drive experiments that have claimed the highest force/InputPower.

Prof. Yang has used TE012 mode shapes in her experimental claims. 

Shawyer has used mode shape TE012 in his Demonstrator experimental claim and reportedly used mode shape TE013 (according to NSF user TheTraveller) in his Boeing Flight Demonstrator experimental claim.

NASA's reported experiment with the highest force/InputPower has involved mode shape TE012.

Shawyer's hypothesis that there is no pressure on the side walls is entirely falsified.  Shawyer is wrong: there is pressure, and hence a force component on the side walls of a truncated conical cavity with spherical ends: this pressure is entirely due to the magnetic field component parallel to the wall.  This pressure has nothing to do with the electric field components.


5) TRANSVERSE MAGNETIC (TM) RESONANCE OF A TRUNCATED CONE WITH SPHERICAL ENDS


BOUNDARY CONDITIONS AT SPHERICAL ENDS


The boundary conditions for the electromagnetic fields at the small end (defined by r = r1), and at the big end (defined by r = r2) are:

Eθ = 0 because electric fields parallel to a conductive surface must be zero
Eφ = 0 because electric fields parallel to a conductive surface must be zero and because TM mode

Br = 0  because magnetic fields perpendicular to a conductive surface must be zero and because TM mode
Bθ = 0 because the polar magnetic field is zero for TMmnp mode shapes with m=0

Therefore the only electromagnetic fields that are non-zero at both spherical ends, for TMmnp mode shapes with m=0, are

* the electric field in the longitudinal direction Er perpendicular to the surface
* the magnetic field in the azimuthal ("transverse") direction Bφ parallel to the surface


BOUNDARY CONDITIONS AT CONICAL WALLS


The boundary conditions for the electromagnetic fields at the conical side walls (defined by θ =+/- θw) are:

Er = 0  because electric fields parallel to a conductive surface must be zero
Eφ = 0 because the azimuthal (transverse) electric field is zero in a TM mode

Br = 0 because the longitudinal magnetic field is zero in a TM mode
Bθ = 0  because magnetic fields perpendicular to a conductive surface must be zero

Therefore the only electromagnetic fields that are non-zero at the conical side walls are:

* the electric field in the polar direction Eθ perpendicular to the surface
* the magnetic field in the azimuthal ("transverse") direction Bφ parallel to the surface

STRESS AT SPHERICAL ENDS

Since as previously shown, the boundary conditions at the spherical ends are such that there is only one non-zero electric field component and only one non-zero magnetic field component, then:

*the shear stress components are zero, and hence the stress field is a principal stress at the ends

Also the dot product of the electric and magnetic fields is zero: E·B=0 because the non-zero electric field component has a zero magnetic field component in the same direction, and vice-versa.

Therefore the state of stress is a principal stress composed simply of the energy density and Coulomb's pressure:

σ3 = - u + p
    = -(εo/2) (E² + c²B²) + εo E²

using the aforementioned boundary conditions and assuming a standing wave solution such that the electromagnetic fields E and B are 90 degrees out of phase, one obtains:

σrr = (εo/2) ((Er Sin(ωt))² - c²(Bφ Cos(ωt))²)
    = (εo/4) ( (Er² - c²Bφ²) - (Er² + c²Bφ²) Cos(2ωt) )


The cyclic-average stress is compressive if the magnetic field in the azimuthal (transverse) direction, times the speed of light, is greater than the electric field perpendicular to the surface, and tensile otherwise:

compressive for  c Bφ > Er
zero for              c Bφ = Er (zero cyclic average)
tensile for           c Bφ < Er

The stress is zero at the very center of the spherical ends, corresponding to the intersection with the axis of axi-symmetry.

Calculations show that tension occurs only in regions that are near the center (closer to the axis of axi-symmetry), while compression occurs further away from the center: further away from the axis of axi-symmetry.   The occurrence of tension is also dependent on the truncated cone geometry.

The higher "p" is in mode shapes TM0np, the smaller the amplitude of tension and the larger the region over which there is compression, and the larger the amplitude of compression.

The absolute value of the compressive stress maximum amplitude is always larger than the one for the tensile stress, even for the lowest order mode shape. In other words, the integral of the stress distribution: the force, is compressive, because compression takes place over a larger area, and it has greater absolute amplitude.
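The cyclic average used in this section, (εo/4)(Er² - c²Bφ²), can be confirmed by averaging the instantaneous stress over one full RF period. A sketch with hypothetical field amplitudes:

```python
import numpy as np

eps0 = 8.8541878128e-12
c = 299792458.0

def cyclic_average_sigma_rr(Er, Bphi):
    """Cyclic average of sigma_rr = (eps0/2)*((Er*sin wt)^2 - c^2*(Bphi*cos wt)^2),
    i.e. (eps0/4)*(Er^2 - c^2*Bphi^2): tensile (>0) for c*Bphi < Er,
    compressive (<0) for c*Bphi > Er."""
    return 0.25 * eps0 * (Er**2 - (c * Bphi)**2)

# Numerical check of the average against the instantaneous expression:
omega = 2 * np.pi * 2.45e9
t = np.linspace(0.0, 2 * np.pi / omega, 200000, endpoint=False)  # one period
Er, Bphi = 5.0e4, 1.0e-4                 # hypothetical amplitudes (c*Bphi = 3e4 < Er)
sigma_t = 0.5 * eps0 * ((Er * np.sin(omega * t))**2
                        - (c * Bphi * np.cos(omega * t))**2)
print(np.isclose(np.mean(sigma_t), cyclic_average_sigma_rr(Er, Bphi)))
```

With these made-up amplitudes c·Bφ < Er, so the average is tensile; swapping the balance (e.g. Er = 2e4) flips the sign to compressive, as stated above.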

STRESS AT CONICAL WALLS

Since as previously shown, the boundary conditions at the conical side walls are such that there is only one non-zero electric field component and only one non-zero magnetic field component, then:

*the shear stress components are zero, and hence the stress field is a principal stress at the conical side walls

Also the dot product of the electric and magnetic fields is zero: E·B=0 because the non-zero electric field component has a zero magnetic field component in the same direction, and vice-versa.

Therefore the state of stress is a principal stress composed simply of the energy density and Coulomb's pressure:

σ3 = - u + p
    = -(εo/2) (E² + c²B²) + εo E²

using the aforementioned boundary conditions and assuming a standing wave solution such that the electromagnetic fields E and B are 90 degrees out of phase, one obtains:

σθθ = (εo/2) ((Eθ Sin(ωt))² - c²(Bφ Cos(ωt))²)
    = (εo/4) ( (Eθ² - c²Bφ²) - (Eθ² + c²Bφ²) Cos(2ωt) )


The cyclic-average stress is compressive if the magnetic field in the azimuthal (transverse) direction, times the speed of light, is greater than the electric field perpendicular to the surface, and tensile otherwise:

compressive for  c Bφ > Eθ
zero for              c Bφ = Eθ (zero cyclic average)
tensile for           c Bφ < Eθ

The higher "p" is in mode shapes TM0np the larger the area over which the stress is tensile, and the higher the amplitude of the tensile stress.  However, the amplitude of the compressive stress is always larger than the amplitude of the tensile stress.  The integral of the stress distribution: the force, is compressive, because compression takes place over a larger area, and it has greater absolute amplitude.

Following is an image from Greg Egan (cited above) showing the stress integrated over the circumference, and thus showing the force per unit length distribution along the length of the conical side walls.  The red curve corresponds to TM011, the green curve corresponds to TM012 and the blue curve corresponds to TM013:





STRESS AT ALL INTERNAL SURFACES FOR TM0np MODES

* all shear stress components are zero, and hence the stress field is a principal stress
* the stress is due to two components: a compressive component due to the magnetic energy density and a tensile component due to the electrostatic pressure
* the compressive stress is entirely due to the magnetic component parallel to the surface, in the transverse (azimuthal) direction
* the tensile stress component is entirely due to the electric field component perpendicular to the surface
* the stress integrated over the surface, the force, is compressive , in other words, the magnetic energy density effect predominates over the electrostatic pressure.  The stress can have small tensile regions, where the electrostatic pressure exceeds the magnetic energy density.


Again, Shawyer's hypothesis that there is no pressure on the side walls is entirely falsified.  Shawyer is wrong: there is pressure, and hence a force component on the side walls of a truncated conical cavity with spherical ends.

___________________________________
NOTES:
(*) Given the two vector fields E and B, the only way to form rotational invariants is to form dot products, which gives E⋅E, B⋅B, and E⋅B . One can do any arithmetic operations with them that one likes and still get a rotational invariant, although it's not guaranteed to be a Lorentz invariant.  It is not guaranteed to be a Lorentz invariant because a magnetic field in one moving frame may be seen as an electric field in a different moving frame, and vice-versa (since E and B are not Lorentz invariant quantities).   One observer’s E field is another’s B field (or a mixture of the two), as viewed from different moving reference frames.   To form Lorentz invariants one has to be able to express them as a linear combination of the inner products of the field strength tensor or its dual (https://en.wikipedia.org/wiki/Electromagnetic_tensor) with themselves, or between themselves:




Also see: https://en.wikipedia.org/wiki/Classification_of_electromagnetic_fields#Invariants , https://en.wikipedia.org/wiki/Electromagnetic_tensor#Properties and https://en.wikipedia.org/wiki/Lorentz_scalar .

(**) Mode shape nomenclature is adopted as per the cylindrical cavity (with constant circular cross section) designation, because there is no standardized way to number truncated cone mode shapes.  I am aware that there is no mode shape for a truncated cone with electromagnetic fields constant in the longitudinal direction, unlike cylindrical cavities which have TM mode shapes with "p=0".  Still, because the truncated cone geometries used up to now have shapes that are not too far from a cylinder with constant cross section (because small cone angles are used and the cones are truncated far from the cone vertex) it is possible to use a cylindrical cavity mode shape designation and select m,n,p accordingly.   
« Last Edit: 05/26/2016 03:48 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
POYNTING VECTOR

The Poynting vector, in the Abraham version of the macroscopic Maxwell equations, is defined as the cross product of the electric field vector E with the magnetic field vector H:

SA = E x H

while in the Minkowski version of Maxwell's equations it is defined as:

SM = c² D x B

In the microscopic version of Maxwell's equations one can use the constitutive equation relating H and B, or relating D and E, to obtain the same expression for both the Abraham and the Minkowski versions of Maxwell's equations:

S = (1/μ0) E x B

where μ0 is the magnetic permeability of the vacuum.

The Poynting vector is a useful expression, because the electromagnetic momentum density in the vacuum is given by the Poynting vector divided by the square of the speed of light in vacuum, c²:

pem = S/c²

Or, recalling that c² = 1/(εo μo):

pem = εo E x B

Unfortunately, the Poynting vector S is not, in general, a Lorentz invariant.  Recall that the energy density u is not, in general, a Lorentz invariant either.  However, if one subtracts the dot product of the Poynting vector with itself, divided by c², from the square of the energy density, one obtains the following scalar, which is a Lorentz invariant:

u² - S · S/c²

Although the values of u and S may be different for observers in different moving reference frames, the value of u² - S · S/c² is the same for observers in different moving reference frames.

(This fact is not as commonly known.  I have not found it in Wikipedia, but it can be found on p. 82 of Introduction to Tensor Calculus, Relativity and Cosmology, 3rd Edition, D. F. Lawden, or one can prove it by inspection.)
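Lawden's invariant can also be verified by brute force: boost the fields with the standard special-relativistic field transformation and check that u² - S·S/c² is unchanged. A sketch with hypothetical field values and boost velocity:

```python
import numpy as np

eps0 = 8.8541878128e-12
mu0 = 4e-7 * np.pi
c = 1.0 / np.sqrt(eps0 * mu0)

def boost_fields(E, B, v):
    """Transform E and B to a frame moving with velocity vector v,
    using the standard field transformation:
    E'_par = E_par,  E'_perp = gamma*(E + v x B)_perp,
    B'_par = B_par,  B'_perp = gamma*(B - (v x E)/c^2)_perp."""
    v = np.asarray(v, float)
    gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)
    n = v / np.linalg.norm(v)
    E_par, B_par = (E @ n) * n, (B @ n) * n
    E2 = E_par + gamma * ((E - E_par) + np.cross(v, B))
    B2 = B_par + gamma * ((B - B_par) - np.cross(v, E) / c**2)
    return E2, B2

def invariant(E, B):
    """u^2 - (S.S)/c^2, with u the energy density and S the Poynting vector."""
    u = 0.5 * (eps0 * (E @ E) + (B @ B) / mu0)
    S = np.cross(E, B) / mu0
    return u**2 - (S @ S) / c**2

E = np.array([3.0e4, -1.0e4, 2.0e4])     # hypothetical field values
B = np.array([2.0e-5, 5.0e-5, -1.0e-5])
E2, B2 = boost_fields(E, B, [0.6 * c, 0.0, 0.0])
print(np.isclose(invariant(E, B), invariant(E2, B2)))   # frame-independent
```

Neither u nor S alone survives the boost unchanged, but the combination does, exactly as stated above.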

TRAVELLING WAVES

An electromagnetic plane wave is an idealization that, although not practical, is important as the simplest solution to Maxwell's equations, and is therefore often used in the literature to explore the fundamental physics of a problem.

Such a "monochromatic" (=single-frequency) electromagnetic plane wave is a one-dimensional wave traveling in the z direction with a phase velocity (the rate at which the phase of the wave propagates in space) equal to the speed of light in vacuum: vp=λ/T=ω/k=c (since the index of refraction of vacuum is n=1, vp=c/n=c).   Such a plane electromagnetic wave represents a purely transverse vibration without any longitudinal electromagnetic component along the z axis of propagation.  In this restricted sense, it is like the transverse vibration of a stretched string.

It has the electric field E and the magnetic field B oriented perpendicular to each other, in the transverse direction to the direction of propagation. In a single electromagnetic plane wave (rather than a standing wave which can be described as two such waves travelling in opposite directions), E and H are exactly in phase.






Recall that the boundary conditions at the metallic spherical ends of the EM Drive require that the electric field parallel to a conductive surface be zero.  Since the electric field in a plane wave is perpendicular to the direction of travel of the plane wave, the electric field is parallel to a wall placed normal to that direction, and must therefore be zero at the metal surface.  But a plane wave is a travelling wave: its field amplitude at any specified location must vary sinusoidally with time, so an ideal travelling plane wave cannot satisfy the boundary conditions at a conductive wall:



The fact that a plane electromagnetic wave travelling normal to a conductive surface cannot satisfy the boundary conditions at the conductive wall is not just a problem for this idealized plane wave.  It is a problem for any travelling wave that has an electric field perpendicular to the direction of propagation.  Therefore, this problem also exists for travelling waves in waveguides: they are only valid solutions for waveguides that do not have conductive walls perpendicular to the direction of propagation. The following image from Wikipedia shows the electric field component perpendicular to the direction of propagation for mode shape TE31 inside a hollow metal waveguide with rectangular cross-section (observe how the electric field at the end of the waveguide oscillates like a sinusoid, so one cannot impose the condition that the electric field be zero, constant with time, at a conductive surface):



STANDING WAVES

A travelling wave with electric field perpendicular to the direction of propagation cannot satisfy the boundary condition that the electric field parallel to the surface of a conductive material must be zero.  To satisfy the boundary condition one needs a standing wave:  a standing wave results from the constructive and destructive interference of two counter propagating travelling waves.  In the following image (from acs.psu.edu) observe how two counter-propagating travelling waves can create a node at which the electromagnetic field is zero:

Therefore, a standing wave makes it possible to insert a conductor (a conductive wall perpendicular to the standing wave) at any of the nodes where the tangential electric field is zero without changing the structure of the electric field!

Therefore we arrive at the conclusion that Shawyer's statement that "there are travelling waves and standing waves in the EM Drive", and hence that self-acceleration of the EM Drive can somehow be explained by single travelling waves in an open waveguide, is incorrect: only standing waves, resulting from the interference of two travelling waves propagating in opposite directions, can meet the boundary conditions at the metallic surfaces.   This applies to sinusoidal standing waves (as in a cavity with constant cylindrical cross-section) as well as to spherical Bessel function standing waves (as in a truncated cone with spherical ends), or any other kind of standing wave.  Single travelling waves cannot meet the boundary conditions.  In order to meet the boundary conditions for a problem with conducting walls, standing waves are necessary.
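The node argument can be checked with a few lines of Python (a sketch using unit wavelength and period, arbitrary amplitude): at a node of the standing wave the field stays zero for all times, so a conducting wall can sit there, while a single travelling wave cannot stay zero anywhere:

```python
import math

k, w = 2 * math.pi, 2 * math.pi   # wavenumber and angular frequency (wavelength = period = 1)

def travelling(z, t):
    return math.sin(k * z - w * t)

def standing(z, t):
    # superposition of two counter-propagating travelling waves = 2 sin(kz) cos(wt)
    return math.sin(k * z - w * t) + math.sin(k * z + w * t)

z_node = 0.5          # kz = pi: a node of the standing wave
times = [0.1 * n for n in range(10)]

max_travelling = max(abs(travelling(z_node, t)) for t in times)
max_standing   = max(abs(standing(z_node, t)) for t in times)
print(max_travelling)  # order 1: a travelling wave cannot stay zero here
print(max_standing)    # ~0: a conducting wall can be placed at this node
```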
« Last Edit: 05/25/2016 05:44 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
The relationship between radiation PRESSURE and the POYNTING VECTOR

A common misconception is that the radiation pressure can always be obtained from the Poynting vector simply by the cyclic average of the Poynting vector divided by the speed of light:

Pradiation = <S>/c

In a "monochromatic"(=single frequency) plane electromagnetic travelling wave, the Poynting vector points in the direction of propagation while oscillating in magnitude at twice the frequency of the travelling wave. The time-averaged magnitude of the Poynting vector is:

<S> = ½ (1/μo) Re(Em x Bm*)

where Em is the peak amplitude of the (complex) electric field Em e^(iωt), and Bm* is the complex conjugate of the peak amplitude of the magnetic field Bm e^(iωt), which in a plane wave are exactly in phase with each other. If the electromagnetic fields are described in terms of their root mean square (rms) values, then the factor of ½ should be replaced by a factor of 1.
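A brute-force cycle average reproduces this formula; a minimal Python sketch (the amplitudes and the 2.45 GHz frequency are arbitrary example values):

```python
import math

mu0 = 4e-7 * math.pi
E0, B0 = 100.0, 100.0 / 3e8     # in-phase peak amplitudes of a plane wave (example values)
w = 2 * math.pi * 2.45e9        # angular frequency (2.45 GHz, as an example)

N = 100000
T = 2 * math.pi / w
# brute-force cycle average of S(t) = E(t) B(t) / mu0 with E and B exactly in phase
S_avg = sum(E0 * math.sin(w * n * T / N) * B0 * math.sin(w * n * T / N) / mu0
            for n in range(N)) / N

S_formula = 0.5 * E0 * B0 / mu0   # <S> = (1/2) Em Bm / mu0 for in-phase fields
print(S_avg, S_formula)           # the two values agree
```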

The problem with using the cyclic average of the Poynting vector divided by the speed of light to obtain the radiation pressure is that the electric field parallel to a conductive surface must be zero, and that the electric field in a plane electromagnetic wave propagating normal to a conductive surface is parallel to the surface, and hence it should be zero.  Hence at the surface of the conductor, E=0, therefore Em = 0 and therefore the Poynting vector must be zero to satisfy the boundary conditions at the conductive surface: S=0 and <S>=0.  There cannot be real power transmitted by a planar electromagnetic wave into a conductive surface. 

One arrives at the conclusion that either

1)  the electric field component parallel to the wall must be zero, hence the electromagnetic plane wave has zero amplitude electric field everywhere.  Therefore its Poynting vector is zero and the radiation pressure is zero at the wall

or

2) the electromagnetic wave satisfying the boundary condition cannot be an electromagnetic plane wave, as somehow the electromagnetic wave's electric field has to decay to zero to match the boundary conditions at the conductive wall.

Now, you may say: this is similar to the mechanical momentum of a ball hitting a perfectly reflecting surface; it makes sense that the momentum at the wall should be zero, because the velocity of the ball at the surface is zero.  What happens at the wall is the interference of two momenta: the momentum of the ball hitting the wall interfering with the opposite momentum of the ball bouncing from the wall.  Similarly, in the electromagnetic case of a wave hitting a conductive surface one gets a standing wave, which results from the interference of a travelling wave propagating against the wall and the counter-propagating wave bouncing back from it.  You may say: OK, I agree with that, but why can't I take the momentum of the planar electromagnetic field far away from the wall, in the far-field, where its Poynting vector and electric field have well-defined sinusoidal values, and not worry too much about the details of what really happens to my travelling plane wave as it hits the conductive surface and has to satisfy the boundary condition?   I think that this is what many introductory texts may be assuming (actually, the introductory texts that discuss using the time-average of the Poynting vector as a measure of radiation pressure seldom discuss a conductor: instead they usually just say "an absorbing surface" or "a perfectly-reflecting surface").

A single travelling plane electromagnetic wave cannot exist inside a resonant cavity: it cannot meet the boundary conditions at the metal surfaces.  The solution for a conical waveguide comprises spherical propagating waves.  The solution for a truncated conical cavity comprises spherical standing waves.
Even considering a cavity with constant cross-section, the standing wave solution responsible for transverse electric (TE) or transverse magnetic (TM) modes varies sinusoidally in the lengthwise direction.  In a truncated cone the standing wave solution varies like a spherical Bessel function in the lengthwise direction of the cavity, such that the wavelength becomes longer as one approaches the small end of the cone:



The relationship between radiation PRESSURE and the ENERGY DENSITY

There is a more general approach to calculate the radiation pressure, which instead of considering the Poynting vector, considers the pressure as being due to the energy density, as for example in the following discussion by Richard Fitzpatrick, Professor of Physics at The University of Texas at Austin: http://farside.ph.utexas.edu/teaching/em/lectures/node90.html

In this more general approach, the radiation pressure is obtained from the cyclic time-average of the energy density:

Pradiation = <u>

Such an analysis is consistent with the derivation in https://forum.nasaspaceflight.com/index.php?topic=39214.msg1526577#msg1526577 that the radiation pressure, for the case in which the Coulomb pressure is zero, is really due to the energy density.
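As a numerical cross-check of this energy-density approach (a Python sketch, arbitrary incident amplitude): at a perfectly conducting wall the reflected wave doubles the tangential B field and cancels the tangential E field, and the cycle-averaged energy density at the wall reproduces the textbook pressure 2I/c on a perfect reflector:

```python
import math

eps0 = 8.8541878128e-12
mu0  = 4e-7 * math.pi
c    = 1.0 / math.sqrt(eps0 * mu0)

E0 = 100.0        # incident plane-wave peak electric field (V/m), example value
B0 = E0 / c
I  = 0.5 * c * eps0 * E0**2      # incident intensity (time-averaged Poynting)

# At a perfectly conducting wall the tangential B doubles and the tangential E
# cancels, so u(wall, t) = (2 B0 cos wt)^2 / (2 mu0); the cycle average of
# cos^2 is 1/2:
u_wall_avg = (2 * B0)**2 / (2 * mu0) * 0.5

P_classic = 2 * I / c            # textbook pressure on a perfect reflector
print(u_wall_avg, P_classic)     # the two expressions agree
```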

As we have shown, one cannot simply calculate the radiation pressure in a resonant cavity using the cyclic average of the Poynting vector divided by the speed of light:

Pradiation = <S>/c

since 1) the Poynting vector is zero at the metal walls for TE modes, and 2) the Poynting vector does not have a constant magnitude in the longitudinal direction for standing waves in a cavity, for any mode shape.  On the other hand, as we have shown, one can certainly calculate the radiation pressure at the walls of a resonant cavity based on the energy density value at the wall for TE modes, or, in general, based on the energy density and the Coulomb pressure at the wall for any mode shape.  Hence obtaining the radiation pressure from the energy density is a more general approach than obtaining it from the Poynting vector field.
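The fact that a standing wave carries no net power can also be checked numerically; a Python sketch (arbitrary amplitude, 1 m wavelength) showing that the cycle-averaged Poynting vector of a standing wave vanishes at every longitudinal position:

```python
import math

mu0 = 4e-7 * math.pi
c = 299792458.0
E0 = 100.0                            # arbitrary example amplitude
k, w = 2 * math.pi, 2 * math.pi * c   # wavelength 1 m

def S(z, t):
    # standing-wave fields: E and B are 90 deg out of phase in both space and time
    Ex = E0 * math.sin(k * z) * math.sin(w * t)
    By = (E0 / c) * math.cos(k * z) * math.cos(w * t)
    return Ex * By / mu0

T = 2 * math.pi / w
N = 1000
averages = []
for z in (0.1, 0.25, 0.4):
    S_avg = sum(S(z, n * T / N) for n in range(N)) / N
    averages.append(S_avg)
    print(z, S_avg)   # ~0 at every z: a standing wave transmits no net power
```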
« Last Edit: 05/26/2016 09:04 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
If I have the time I could check this using EMPro within the next weeks. Using the eigenresonance solver, the resonant frequency and the Q will be displayed directly for each mode.

For now I only have the results of the spreadsheet with the approximation formulas.
It would be great if you check this with EMPro.  It seems to me that to maximize Q one wants to maximize

∫ElectromagneticEnergy dV/ ∫ ElectromagneticEnergy dA

this means minimizing  ∫ ElectromagneticEnergy dA

while maximizing

∫ElectromagneticEnergy dV

this means a mode shape that would have most of the high electromagnetic energy in the interior volume of the cavity instead of the exterior of the cavity near the metal
I have not forgotten this calculation, but have too many other things to do at the moment.
I will check it, if I have time, during future calculations at our local university.
http://forum.nasaspaceflight.com/index.php?topic=39214.msg1508105#msg1508105
http://forum.nasaspaceflight.com/index.php?topic=39214.msg1508376#msg1508376

I got something to share on the question.  :)
Remember, the basic idea was:
Due to the linearity of the Maxwell equations, the mode number of a given mode shape affects the resulting Q. In fact, the higher the mode number, the higher the Q, as long as the cavity dimensions remain constant. In this case we want to focus on the "p" value (of TXmnp*). The mode under discussion is TE01p.

We did calculations using a spreadsheet/exact solution to calculate the Q and the resonant frequency for this mode family for low mode numbers.

These calculations have now been verified using Keysight EMPro.

I could follow the TE01 mode for the Brady-cone dimensions (Comsol calculation by Frank Davies, NASA) up to TE016.
I found that the Q increases linearly with the index "p" for this mode (of course, the higher the mode number, the higher the resonant frequency).
For higher values (TE017, TE018, ...) the modes were either not present due to degeneracy, or I simply couldn't find them. The higher the resonant frequency, the smaller the frequency distance to other modes, and maybe it is unlikely that these higher-"p" modes get excited because, in relation to the wavelength at higher frequencies, the diameters are too big. It is easier to satisfy the lowest-energy conditions of higher-order modes like TE02p or TE03p for diameters much larger than half the wavelength.
There was a TE0 mode near the calculated TE017 frequency (shown below). Again, I couldn't find anything higher than TE016.

Look at the TE015 mode and the location of its maximum energy density   :o

*Using the mode notation for a cylindrical cavity, because the cone half-angle θ is small and there is no convention for the notation of modes inside a truncated conical cavity resonator.

Notice the datafiles below the pics.
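For a rough cross-check of the trend (frequency rising with p), one can use the closed-form resonance formula for an ideal cylindrical cavity; a Python sketch, where the radius and length are assumed stand-ins for the frustum's mean dimensions (not an exact conical solution):

```python
import math

c = 299792458.0
x01 = 3.8317          # first zero of J0' (= first zero of J1): TE01 cutoff root

# Hypothetical cylinder standing in for the truncated cone: mean radius and
# length roughly those of the Brady frustum (assumed values, for illustration only)
a, L = 0.11, 0.2286   # metres

def f_TE01p(p):
    """Resonant frequency of the TE01p mode of an ideal cylindrical cavity."""
    return (c / (2 * math.pi)) * math.sqrt((x01 / a)**2 + (p * math.pi / L)**2)

freqs = [f_TE01p(p) for p in range(1, 7)]
for p, f in zip(range(1, 7), freqs):
    print(f"TE01{p}: {f / 1e9:.3f} GHz")
# the frequencies rise monotonically with the longitudinal index p
```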
« Last Edit: 05/25/2016 06:18 PM by X_RaY »

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
I found that the Q increases linearly with the index "p" for this mode (of course, the higher the mode number, the higher the resonant frequency).

Looks like about a 10.4% increase in Q for each higher mode number. Nice!

Increasing the size of the frustum lowers the resonant frequency. Just build a bigger frustum...  ;D
« Last Edit: 05/18/2016 07:43 PM by Monomorphic »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
I found that the Q increases linearly with the index "p" for this mode (of course, the higher the mode number, the higher the resonant frequency).

Looks like about a 10.4% increase in Q for each higher mode number. Nice!

Increasing the size of the frustum lowers the resonant frequency. Just build a bigger frustum...  ;D
Yes, this was shown before (in this thread). Now we know** that the Q also increases with the mode index. :)

That was the idea I had wanted to verify for a quarter of a year or so, using professional software without the restrictions of FEKO Lite. ;)


**At least the prediction holds using the Keysight-EMPro FEM algorithm.
« Last Edit: 05/19/2016 05:49 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Just picked the TE015 frequency and tried to find it using FEKO Lite, and got it on the first run. The result (even with the coarse mesh in FEKO Lite) is almost as good as the EMPro calculations.
« Last Edit: 05/22/2016 07:59 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...

Is it possible that thrust generation is based on the magnetic field vectors (rotation vector of the standing wave {curl_H}) acting at the sidewall(s), instead of radiation pressure at the end plates?

I mean an unbalanced force from eddy currents in the two opposite directions, and therefore different values of the reactive forces along these directions.
« Last Edit: 05/26/2016 09:06 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...
Is it possible that thrust generation is based on the magnetic field vectors (rotation vector of the standing wave {curl_H}) acting at the sidewall(s) instead of radiation pressure at the end plates?

In here https://forum.nasaspaceflight.com/index.php?topic=39214.msg1526577#msg1526577:

I show that for TE modes the stress at the conical walls is compressive and entirely due to the energy density, due to the magnetic component in the longitudinal direction, parallel to the conical side walls.  Therefore, if thrust is real for the mode shapes TE012 and TE013 used by Shawyer, it must be due to the magnetic field vector parallel to the conical walls and/or the magnetic field vector parallel to the end plates.
« Last Edit: 05/26/2016 08:59 PM by Rodal »

Offline VAXHeadroom

  • Full Member
  • *
  • Posts: 192
  • Whereever you go, there you are. -- BB
  • Baltimore MD
  • Liked: 249
  • Likes Given: 140
...
Is it possible that thrust generation is based on the magnetic field vectors (rotation vector of the standing wave {curl_H}) acting at the sidewall(s) instead of radiation pressure at the end plates?

In here https://forum.nasaspaceflight.com/index.php?topic=39214.msg1526577#msg1526577:

I show that for TE modes the stress at the conical walls is compressive and entirely due to the energy density, due to the magnetic component in the longitudinal direction, parallel to the conical side walls.  Therefore if thrust is real for mode shapes TE012 and TE013 used by Shawyer, it must be due to the magnetic field vector parallel to either the conical walls, and/or the magnetic field vector parallel to the end plates.

In the meep simulation which I have subsequently animated (December 2015), the magnetic fields are most definitely NOT parallel to the small end or the side sloped walls.  They ARE parallel to the big end plate.
Original Post: http://forum.nasaspaceflight.com/index.php?topic=39004.msg1467619#msg1467619
Emory Stagmer
  Executive Producer, Public Speaker UnTied Music - www.untiedmusic.com

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...
Is it possible that thrust generation is based on the magnetic field vectors (rotation vector of the standing wave {curl_H}) acting at the sidewall(s) instead of radiation pressure at the end plates?

In here https://forum.nasaspaceflight.com/index.php?topic=39214.msg1526577#msg1526577:

I show that for TE modes the stress at the conical walls is compressive and entirely due to the energy density, due to the magnetic component in the longitudinal direction, parallel to the conical side walls.  Therefore if thrust is real for mode shapes TE012 and TE013 used by Shawyer, it must be due to the magnetic field vector parallel to either the conical walls, and/or the magnetic field vector parallel to the end plates.

In the meep simulation which I have subsequently animated (December 2015), the magnetic fields are most definitely NOT parallel to the small end or the side sloped walls.  They ARE parallel to the big end plate.
Original Post: http://forum.nasaspaceflight.com/index.php?topic=39004.msg1467619#msg1467619

The reason for that is that you are solving Meep using Cartesian coordinates, and you have the end plates parallel to one of the Cartesian axes while the conical walls are not oriented parallel to any Cartesian axis.  This is a bad choice of coordinates and a bad choice for display of the results.  In the exact solution of the problem one instead uses spherical coordinates, such that the conical walls are aligned along the spherical radius while the polar angle coordinate is perpendicular to it.  This leads to a clear satisfaction of the boundary conditions.  The situation for Meep is made worse by the fact that Meep uses a Finite Difference discretization approach and is not based on a variational principle.  I told this to aero and Shell a long time ago.  The situation was so bad that it was almost impossible for aero to tell what mode shape was being excited in Meep (due to the poor choice of what was being displayed). The choice of images is very poor and has led to misinterpretation of the fields by NSF Meep users.  NSF Meep users would be well advised to at least do what FEKO, COMSOL and other Finite Element packages do as standard, out of the box: instead of plotting the fields as Cartesian-axis components, they plot the norm (https://en.wikipedia.org/wiki/Norm_(mathematics)) of the vector field (which is invariant under rotations).  This leads to a much better physical understanding of what is going on.

You should be plotting the vector fields with their vector magnitude and vector orientation, in order to get a physical feeling for what is going on.

This is a good example of what a good plot looks like (FEKO) (this is showing the electric vector field: observe how FEKO shows it to be perpendicular to the conical walls, correctly satisfying the boundary conditions):



Also, X_RaY has been showing good simulations using FEKO.  And previously and concurrently, X_RaY has also been showing great simulations with EMPro.  I was also showing simulations using the exact solution, with the fields in spherical coordinates, and also plotting vector fields.  All of these are much better than the Meep displays of fields in Cartesian coordinates, which led to rampant confusion. (The human brain is not very good at vector decomposition, particularly when the electromagnetic fields are displayed as colored contours along Cartesian axes that are not oriented along the walls.)

Another huge problem with the NSF Meep displays has been that the electromagnetic fields have been displayed in colors without any numerical scale from which to ascertain the significance of the fields.  This is bad practice; FEKO and COMSOL always give you the numerical values corresponding to the contour fields being plotted. 

Meep was intended to be used NOT as a black box like FEKO or COMSOL, but as an open-source code with which students could write their own code to solve particular problems.

In any case, the boundary conditions are what they are; they can be found in standard introductory textbooks and are not at all a subject of controversy.  The boundary condition dictates that the magnetic field perpendicular to the conductive wall must be zero, and that the only non-zero component of the magnetic field is the component parallel to the conductive wall.  When a wall is oriented at an angle to the Cartesian axes, the boundary conditions act on both Cartesian components: to satisfy the boundary condition at such a wall you naturally have, in general, non-zero components along both Cartesian directions (vector decomposition).
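A minimal sketch of the vector decomposition being described (Python; the 15° half-angle and the field vector are hypothetical example values): a field parallel to a sloped wall has non-zero components along both Cartesian axes, yet a zero component along the wall normal, and its norm is invariant under rotations:

```python
import math

def decompose(H, theta_deg):
    """Split a field vector H = (Hx, Hz) into components tangential and normal
    to a wall tilted by the cone half-angle theta from the z axis."""
    th = math.radians(theta_deg)
    tangent = (math.sin(th), math.cos(th))    # unit vector along the sloped wall
    normal  = (math.cos(th), -math.sin(th))   # unit vector perpendicular to it
    Ht = H[0] * tangent[0] + H[1] * tangent[1]
    Hn = H[0] * normal[0] + H[1] * normal[1]
    return Ht, Hn

# A field parallel to the sloped wall has nonzero x AND z Cartesian components,
# which single-component Cartesian plots obscure.
theta = 15.0                                  # hypothetical half-angle, degrees
H_parallel = (math.sin(math.radians(theta)),  # unit vector along the wall
              math.cos(math.radians(theta)))
Ht, Hn = decompose(H_parallel, theta)
print(Ht, Hn)   # tangential = 1, normal = 0: satisfies the conductor boundary condition

norm = math.hypot(*H_parallel)                # rotation-invariant magnitude
print(norm)     # = 1 regardless of axis orientation
```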
« Last Edit: 05/29/2016 02:40 AM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Further to this issue about Boundary Conditions, Jackson has an excellent discussion in his masterpiece Classical Electrodynamics, 3rd Edition, pages 352 to 356  (ISBN-10: 047130932X; ISBN-13: 978-0471309321).  Pay special attention to the graph on page 355, Fig. 8.2, "Fields near the surface of a good, but not perfect conductor".  For a good but not perfect conductor, for example copper as used in the EM Drive, and as modeled in Meep with the Drude equation model, a very tiny magnitude electric field parallel to the surface will be present, as well as a very tiny magnetic field perpendicular to the surface.  The electric field parallel to the surface is inversely proportional to the square root of the conductivity:

E∥ ≈ sqrt(μo ω / (2 σ)) (1 - i) (n x H∥)

This solution exhibits the expected rapid exponential decay (skin depth) and π/4 phase difference.  For a good conductor, the fields inside the metal conductor are parallel to the surface and propagate normal to it, with magnitudes that depend only on the tangential magnetic field that exists just outside the surface of the metal conductor.

As shown in the graphs in Jackson's book, and as one can readily calculate, these fields are practically zero for copper, due to its very high conductivity.  Therefore it is not a surprise that Meep does not show a significant difference between the fields calculated using the perfect conductor model and the Drude model.  The main influence of the finite-conductivity Drude model is to allow the calculation of a finite Q (instead of an infinite Q).  But again, the boundary condition is such that the electric field parallel to the surface and the magnetic field perpendicular to the surface must be practically zero at the surface, due to the very high conductivity of copper.  This is particularly so for EM Drive experiments, where experimenters seek a high Q, which is tantamount to practically zero fields for these variables.  It is completely inconsistent for EM Drive experimenters to advocate a high Q (Shawyer even claims to research superconducting EM Drives) and not realize that these boundary conditions require these fields to be practically zero at the surface of the good conductor.

NOTE: for those not having ready access to Jackson's monograph, the following discussion by a professor at Duke University is also good:  https://www.phy.duke.edu/~rgb/Class/phy319/phy319/node59.html
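As a numeric illustration of just how small these fields are, here is a Python sketch (the 2.45 GHz frequency is an assumed example value; σ is the standard handbook conductivity of copper): the surface resistance of copper comes out to tens of milliohms, roughly 10^-5 of the free-space wave impedance, so the tangential E at the wall is indeed practically zero:

```python
import math

mu0   = 4e-7 * math.pi
sigma = 5.8e7          # conductivity of copper, S/m (handbook value)
f     = 2.45e9         # example drive frequency (assumed), Hz
w     = 2 * math.pi * f

delta = math.sqrt(2.0 / (w * mu0 * sigma))   # skin depth
Rs    = math.sqrt(w * mu0 / (2.0 * sigma))   # surface resistance = 1/(sigma * delta)
Z0    = 376.73                               # impedance of free space, ohms

# The ratio of tangential E at the wall to that of the incident wave is of
# order Rs/Z0: this is the 1/sqrt(conductivity) suppression discussed above.
print(delta)        # skin depth: ~1.3 micrometres at 2.45 GHz
print(Rs / Z0)      # ~3e-5: tangential E at a copper wall is practically zero
```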
« Last Edit: 05/29/2016 03:01 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
My biggest problem with the meep pics was that the single pics only represent single vector components (most of what we saw was autoscaled), so this has generated a lot of confusion.

Offline VAXHeadroom

  • Full Member
  • *
  • Posts: 192
  • Whereever you go, there you are. -- BB
  • Baltimore MD
  • Liked: 249
  • Likes Given: 140
My biggest problem with the meep pics was that the single pics only represent single vector components (most of what we saw was autoscaled), so this has generated a lot of confusion.

Just to clear up any possible confusion as to what the animation in that post shows:
http://forum.nasaspaceflight.com/index.php?topic=39004.msg1467619#msg1467619

The animation in the linked post is 3D vectors plotted with origin, direction, and amplitude, of the H field for two slices across the X plane (x=0, x=10).  There is a min(always 0) and max (for scale) in numbers along with the frame number on the top left.  All values are in 'meep units' which aero tells me should be multiplied by 3.33 to get to 'engineering units'.

The meep simulation is 291x390x327 elements (~37M elements) (on the order of 1/100 of the wave size), and ~3.2 degrees of phase per frame (112 frames within one waveform).  The E fields are indeed parallel to the surfaces.  The H fields are not.
Emory Stagmer
  Executive Producer, Public Speaker UnTied Music - www.untiedmusic.com

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...
The meep simulation is 291x390x327 elements (~37M elements) (on the order of 1/100 of the wave size), and ~3.2 degrees of phase per frame (112 frames within one waveform).  The E fields are indeed parallel to the surfaces.  The H fields are not.
Sorry to be blunt about this, but the following is my professional opinion from decades of experience with authoring and running Finite Difference and Finite Element methods.  I am being blunt in order to be helpful and clarify the issues.

That output is all wrong: it indicates something very bad going on in the modeling input and/or the post-processing calculations leading to those bad results.  The E fields cannot be parallel to the conductive surface, as shown in any basic course on electromagnetism.  The H fields, on the other hand, should BE parallel to the surface.  You write that the model shows the opposite.  Obviously there is something very wrong with the way the Meep calculations have been run in the NSF EM Drive thread and/or with the calculations implemented to display those electromagnetic fields. 

I suggested a long time ago to aero that he should run a comparison between his Meep simulations and an exact solution.  I never saw any successful comparison of these NSF EM Drive Meep runs with any exact solution.   On the contrary, the comparisons were far off the mark: he could not even match the natural frequency for simple problems.  Instead of working on learning what has to be done to get a correct comparison with an exact solution, the emphasis appears to have been to model DIY builds, without prior validation that the modeling is being done correctly.

If so, this is not Meep's fault: rather, it may be a modeling problem due to lack of prior experience and training in running Finite Difference computer models (so many things can go wrong in the modeling process).  Computer models need to be validated before attempting to model actual experiments, as the computer paradigm is "Garbage In = Garbage Out".  The first step for anybody learning to run Finite Difference or Finite Element models should be to compare a computer model against an exact solution, for example the exact solution of resonance in a cylindrical cavity.  Then proceed to a comparison with Greg Egan's exact solution for a truncated cone with spherical end caps.  Then proceed to a comparison with COMSOL's FE analysis of several mode shapes by Frank Davis at NASA.   Otherwise one ends up with what you are discussing above: a model with a mesh having 291x390x327 elements (~37M elements) (do you mean nodes?  there are no elements in Meep, since it is a Finite Difference program, not a Finite Element program) (*), and images where the NSF modeler cannot tell what mode shape has been excited and where the boundary conditions appear not to have been correctly satisfied.  How is one going to get the correct solution to a system of partial differential equations if the boundary conditions are not correctly satisfied?

-----------------------
(*) It is also important that computer modelers understand the software they are using: a Finite Difference code like Meep does not have elements with interpolating basis functions (usually polynomials) as in Finite Element analysis.  Instead a Finite Difference code like Meep relies on finite differences at each node, to model the partial differential equations.
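To illustrate the distinction, here is a toy sketch (mine, not from Meep or the thread): a finite-difference code approximates derivatives pointwise at grid nodes, with no interpolating basis functions. The example approximates u''(x) for u = sin(x) with a central difference and checks the expected second-order convergence under grid refinement — the same kind of validation-against-an-exact-solution step advocated above.

```python
import math

def second_derivative_error(n):
    """Max error of the central-difference u'' on n nodes over [0, pi],
    for the test field u = sin(x), whose exact u'' is -sin(x)."""
    h = math.pi / (n - 1)
    xs = [i * h for i in range(n)]
    u = [math.sin(x) for x in xs]
    err = 0.0
    for i in range(1, n - 1):
        # Nodal finite difference: no elements, no basis functions.
        approx = (u[i - 1] - 2 * u[i] + u[i + 1]) / h**2
        exact = -math.sin(xs[i])
        err = max(err, abs(approx - exact))
    return err

coarse = second_derivative_error(21)
fine = second_derivative_error(41)
# Halving h should cut the error by ~4 (second-order accuracy).
print(coarse / fine)
```

If a run does not reproduce a known exact solution with the expected convergence rate, the model, not the physics, is suspect.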

« Last Edit: 05/30/2016 12:13 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...
The meep simulation is 291x390x327 elements (~37M elements) (on the order of 1/100 of the wave size), and ~3.2 degrees of phase per frame (112 frames within one waveform).  The E fields are indeed parallel to the surfaces.  The H fields are not.
I am sure you (and aero) put much work into these calculations as well as into the visualisation. I am sorry, but Dr. Rodal is absolutely right about the underlying physics: the E-field vector close to a conductive wall can't be parallel to that wall, and so on.  Boundary conditions of electromagnetic fields are non-negotiable ;)
« Last Edit: 05/29/2016 07:39 PM by X_RaY »

Offline VAXHeadroom

  • Full Member
  • *
  • Posts: 192
  • Whereever you go, there you are. -- BB
  • Baltimore MD
  • Liked: 249
  • Likes Given: 140
...
I am sure you (and aero) put much work into these calculations as well as into the visualisation. I am sorry, but Dr. Rodal is absolutely right about the underlying physics: the E-field vector close to a conductive wall can't be parallel to that wall, and so on.  Boundary conditions of electromagnetic fields are non-negotiable ;)

Thank you both, back to the books :)
Emory Stagmer
  Executive Producer, Public Speaker UnTied Music - www.untiedmusic.com

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...
The momentum of light in media remains one of the most controversial topics in physics [1–6]. The debate has continued for more than a century since Minkowski and Abraham formulated 4 × 4 energy-momentum tensors in the early 1900s [7–9].
https://www.researchgate.net/publication/292342272_Kinetic-energy-momentum_tensor_in_electrodynamics
Professor Melcher at MIT (and previously Prof. Chu and others at MIT's Radiation Laboratory) was already showing decades ago in MIT classes that the Einstein-Laub formulation of electrodynamics is invalid since it yields a stress-energy-momentum tensor that is not frame invariant. (It is an interesting historical vignette that Einstein did not realize this at the time he wrote the paper with Laub)

They do credit Prof. Chu with the correct invariance relations.  But that is one of the reasons why Prof. Chu at MIT developed his formulation.

This was known by students that listened to the lectures of Prof. Chu and Melcher at MIT  ;)

Reference: Prof. Melcher's masterpiece "Continuum Electromechanics", which he wrote in 1972-1973 while on sabbatical at Cambridge University working with Sir Geoffrey Taylor and G. Batchelor

Correct, although the Einstein-Laub formulation of electrodynamics does work in a real-world lab. In 1973, Ashkin and Dziedzic performed an experiment in which they focused a green laser beam on the surface of water and saw what they called "the toothpaste tube effect," where a bulge appeared in the surface of the water. Using a Lorentz formula, the expansive and compressive forces effectively cancel out, negating a possible bump forming on the water surface. The real-world lab test showed where purely theoretical derivations of why there are thrusts from an asymmetrical cavity are lacking. . .

This is why lab data is king right now.

Shell

Back to work... will be on later. Have company helping me move my new frame for the lab.

Ashkin and Dziedzic used an argon-ion laser source to investigate the pressure on solid dielectric spheres immersed in liquid, and the pressure on a liquid-air interface owing to the passage of a beam of radiation.

Ashkin and Dziedzic's experiment cannot distinguish between the various stress tensors (Abraham, Minkowski, Einstein-Laub, etc.), since it gives no information about the direction of the local surface force.

Ashkin and Dziedzic observed the liquid interface to bend outwards both when the light enters and when it leaves the liquid. This is contrary to the conservation law derived from a Lagrangian formalism, which predicts inward bending.

Loudon (see: Loudon, R., Radiation pressure and momentum in dielectrics. Fortschr. Phys., 2004. 52(11-12): p. 1134-1140) concluded that:

Quote from: Loudon
The Ashkin and Dziedzic experiment observed an outward bulge on an illuminated water surface, apparently consistent with the sign of the Poynting surface force and the value of the Minkowski momentum. In agreement with Gordon, it is shown here that the effect is governed by a radial force and it provides no information on the longitudinal force associated with the linear momentum of light.
(bold added for emphasis)

One of the reasons why this Abraham Minkowski controversy persists is because of problems with the experiments:

1) Most experiments in the Abraham-Minkowski controversy (including Ashkin and Dziedzic's) ignore electrostriction and magnetostriction.  A Helmholtz electrostriction term should be added to Abraham's and Minkowski's tensors whenever necessary.  Neither Abraham's nor Minkowski's tensor describes electrostriction or magnetostriction; these tensors are therefore unable to give a complete description of the local electromagnetic state in the medium.  It is unfortunate that electrostriction and magnetostriction are ignored, as one of the things Prof. Woodward has realized with his Woodward/Mach Effect experiments is the importance of electrostriction and magnetostriction.  It can be misleading to perform an experiment and extract conclusions without assessing the significance of electrostriction and magnetostriction.

2) The problem with Abraham-Minkowski experiments performed to date (and this goes also for EM Drive experiments) is that they are usually trying to measure only one term of the equation of motion or are trying to measure a boundary of a fixed block of dielectric material – so it is very difficult to consider all forces at the boundary. Instead of resolving the problem, contradictory experimental results have often clouded the picture further, because of incomplete analysis of very small forces, where not all the forces are properly taken into account.

------------------------------
At MIT, Professors P. Penfield Jr. and H.A. Haus, Electrodynamics of Moving Media. 1967, Cambridge, Mass.: MIT Press, pp. 241-243, provided a very detailed discussion showing that the force densities each comprise about 20 terms, most of which correspond to very small forces.

In Abraham-Minkowski experiments, the majority of these forces are ignored by the authors!

Authors in China, more than 50 years after MIT's work, are re-discovering this:

http://arxiv.org/pdf/1504.06437

Analytic derivation of electrostrictive tensors and their application to optical force density calculations
Wujiong Sun1,2, S. B. Wang2, Jack Ng3, Lei Zhou1, and C. T. Chan2*
1 State Key Laboratory of Surface Physics, Key Laboratory of Micro and Nano Photonic Structures (Ministry of Education), and Collaborative Innovation Center of Advanced Microstructures, Fudan University, Shanghai 200433, China
2 Department of Physics and Institute for Advanced Study, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
3 Department of Physics and Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong
« Last Edit: 06/09/2016 03:26 PM by Rodal »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
CONSERVATION of MOMENTUM and calculation of forces

The following conservation of momentum equation can easily be shown to be automatically satisfied by Maxwell's equations (without any extra conditions).  The conservation of momentum equation for an electromagnetic continuous closed system is

ρE + J×B + (1/c²) ∂S/∂t = ∇·σ

where J is the electric current density and ρ the electric charge density.
The electromagnetic momentum density is

g = (1/c²) S

In the above expressions, S is the Poynting vector field discussed here:  https://forum.nasaspaceflight.com/index.php?topic=39214.msg1529920#msg1529920

and σ is the stress tensor discussed here: https://forum.nasaspaceflight.com/index.php?topic=39214.msg1526577#msg1526577 , and  ∇·σ denotes the divergence of the stress tensor field.

The continuous form of the Lorentz force per unit volume, f, is defined as follows:

f = ρE + J×B

So, we can express the conservation of momentum equation in terms of the Lorentz force per unit volume, the derivative with respect to time of the Poynting vector, and the divergence of the stress tensor as follows:

f + (1/c²) ∂S/∂t = ∇·σ

Incorrect definitions of body force for unsteady behavior of the EM Drive in published papers

Several authors of papers attempting to calculate the EM Drive force incorrectly define the force as being due only to one term in the equations of motion, for example, as due only to the derivative with respect to time of the electromagnetic momentum density, or as being due only to the divergence of the stress tensor.  This is incorrect.  For general unsteady behavior, the body force is due to all these terms in the equation of motion and not just to one of these terms.  By defining the force on the EM Drive as being due to only one term in the equations of motion, these authors arrive at a completely incorrect result: that a solution of Maxwell's equations or a solution of Yang-Mills equations (which imply conservation of momentum) can lead to self-acceleration of the center of mass,  which is in complete contradiction with conservation of momentum.

1) Alexander Trunev, for example in his paper

"General Relativity and Dynamical Model of Electromagnetic Drive"
Alexander Trunev
Научный журнал КубГАУ, №116(02), 2016 года
http://ej.kubagro.ru/get.asp?id=5781&t=1

in his Equations (8), (13), (14), (21) and (22), Trunev defines and proceeds to calculate forces for the EM Drive taking into account only the derivative with respect to time of the Poynting vector, hence disregarding the balancing effect (in the equation of conservation of momentum) of the term corresponding to the divergence of the stress tensor.

2) Juan Yang.  Similarly in several of Juan Yang's papers, for example this one:

"Prediction and experimental measurement of the electromagnetic thrust generated by a microwave thruster system"
Yang Juan(杨涓), Wang Yu-Quan(王与权), Ma Yan-Jie(马艳杰), Li Peng-Fei(李鹏飞), Yang Le(杨乐), Wang Yang(王阳), and He Guo-Qiang(何国强)
Chin. Phys. B Vol. 22, No. 5 (2013)
cpb.iphy.ac.cn/EN/article/downloadArticleFile.do?attachType=PDF&id=53411

Yang similarly defines the force as being due to the time derivative of the Poynting vector (or, equivalently, in absence of body forces, (using the divergence theorem), as only due to the divergence of the stress tensor):

Quote from: Yang et.al.
Obviously the right hand of Eq. (8) is the EM force exerted on the EM field boundary of the limited closed volume

In Yang et al., she correctly sets up the conservation of momentum equation:

f + (1/c²) ∂S/∂t = ∇·σ

which for f = 0 (no charges and no electric currents inside the cavity of the EM Drive) simply states that the divergence of the stress tensor should equal the derivative with respect to time of the electromagnetic momentum (hence no force on the center of mass-energy of the EM Drive).  Yet Yang et al. incorrectly define the force as being due to either term separately (the two are of equal magnitude) instead of being due to the difference of both terms (a difference which, under no external forces, is exactly zero).

3) Guido Fetta, in his paper

Guido P. Fetta. "Numerical and Experimental Results for a Novel Propulsion Technology Requiring no On-Board Propellant", 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference, Propulsion and Energy Forum,
http://dx.doi.org/10.2514/6.2014-3853

calculates the force, in Equations (4) and (5) of the above-mentioned paper, as being due to the time-averaged integral of the (negative of the) stress tensor component

- σ3 = u - p
     = u - εo E²
     = ((1/μo) B² - εo E²)/2
     = (μo H² - εo E²)/2

(See https://forum.nasaspaceflight.com/index.php?topic=39214.msg1526577#msg1526577: this is the so-called "Lagrangian Density", which is an invariant under Lorentz transformations as well as an invariant under rotations).

This is wrong for general unsteady behavior, as in the radio-frequency excitation of the EM Drive: the force should have been defined also taking into account the derivative with respect to time of the Poynting vector.



EXAMPLE 1. Two lumped masses connected with a spring, oscillating in space


This error is similar to making the following error in a harmonic oscillator. 

Define an (undamped) harmonic oscillator consisting of two masses connected by a spring, floating freely in space:



It is trivial to show that the equation of motion for this system is:

m d²x/dt² + k x = 0

where:

x = x2 - x1 = distance between the two masses
m = m1*m2/(m1+m2) is the "reduced mass": ½ of the harmonic mean of the masses
k = spring stiffness
t = time

and that the solution is a simple harmonic motion of period T = 2π√(m/k); the frequency is the reciprocal of this, f = (1/(2π))√(k/m), and the "angular frequency" is ω = √(k/m).

The center of mass never accelerates under this vibration; only the positions of the two masses oscillate with respect to the fixed center of mass.



One can readily see from the reduced-mass expression that if one mass is much lighter than the other, the lighter mass will exhibit the larger motion.  In the limit where one mass is much greater, the larger mass is practically immobile and the system's center of mass lies near the center of mass of the larger mass.

In this equation of motion, the term m d²x/dt² is due to the derivative with respect to time of the momentum, and hence it is analogous to the term due to the derivative with respect to time of the Poynting vector.  Similarly, the spring term k x is analogous to the term due to the divergence of the stress tensor (using the stress-strain equation).

Defining the force as being due only to the derivative with respect to time of the Poynting vector (as done by Trunev, or as effectively done by Yang) is analogous to defining the force in the harmonic oscillator as the inertial force on the mass, m d²x/dt², or, since m d²x/dt² = -k x, equivalently defining the force as the negative of the spring force.

This is an incorrect treatment of the problem.  As in the harmonic oscillator example described above, the center of mass never accelerates.  An external force F is required to accelerate the center of mass, in which case the force is equal to

m d²x/dt² + k x = F

The external force equals the sum of both terms, m d²x/dt² and k x, not just one of them.
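The two-mass oscillator argument can be checked numerically. The sketch below (toy values for the masses and stiffness, chosen for illustration only) integrates the free system and confirms that, with zero external force, the center of mass does not drift while the relative coordinate oscillates at ω = √(k/m_reduced).

```python
import math

m1, m2, k = 1.0, 3.0, 10.0          # hypothetical masses (kg), stiffness (N/m)
mu = m1 * m2 / (m1 + m2)            # reduced mass
omega = math.sqrt(k / mu)           # angular frequency of the relative motion

# Symplectic-Euler integration of the free (F_ext = 0) system.
x1, x2 = 0.0, 1.1                   # natural spring length 1.0, stretched 0.1
v1, v2 = 0.0, 0.0
dt = 1e-4
com0 = (m1 * x1 + m2 * x2) / (m1 + m2)
for _ in range(100000):
    f = k * (x2 - x1 - 1.0)         # spring force on mass 1 (+x direction)
    v1 += f / m1 * dt               # equal and opposite impulses:
    v2 += -f / m2 * dt              # total momentum stays zero
    x1 += v1 * dt
    x2 += v2 * dt
com = (m1 * x1 + m2 * x2) / (m1 + m2)
print(abs(com - com0))              # stays ~0: no self-acceleration
```

The internal inertial and spring forces are large at every instant, yet their sum over the system is zero, which is exactly the point being made about the Poynting and stress-divergence terms.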



EXAMPLE 2.  Conservation of momentum for continuous non-relativistic media in unsteady motion

Another example is the Cauchy momentum equation, which describes non-relativistic momentum transport in any continuous medium. This equation of motion can be written in convective (or Lagrangian) embedded coordinates that follow the material points:

ρ Du/Dt = ∇·σ + ρ g

where ρ is the density (mass/volume) at the point considered in the continuum (for which the continuity equation holds), σ is the stress tensor, and g contains all of the body forces per unit mass (often simply the gravitational acceleration). u is the flow velocity vector field, which depends on time and space. The symbol D/Dt denotes the substantial or material derivative (https://en.wikipedia.org/wiki/Material_derivative), such that Dy/Dt, for a tensor field y, is:

Dy/Dt = ∂y/∂t + u·∇y


In the above equation describing conservation of momentum in non-relativistic continuum media, the term due to the divergence of the stress tensor ∇·σ is analogous to the divergence of the stress tensor ∇·σ term in the conservation of momentum equation for electromagnetic media.  The term ρ Du/Dt due to the derivative with respect to time of the momentum is analogous to the term (1/c2)∂S/∂t in the equation for conservation of electromagnetic momentum.

Note that it would be completely incorrect to define, for general unsteady behavior, the body force acting on the center of mass as being due only to the derivative with respect to time of the momentum ρ Du/Dt or being due only to the divergence of the stress tensor ∇·σ.  No, in general, for unsteady behavior, the body force must take into account both terms: the change of momentum with respect to time, as well as to take into account the divergence of the stress tensor.

Only for steady-state problems (for which the momentum does not change with time) can one define the body force to be equal in magnitude to the divergence of the stress tensor.  And for unsteady behavior, only for problems in which the divergence of the stress is zero would it be correct to define the body force magnitude to be equal to the derivative with respect to time of the momentum.
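The material derivative Dy/Dt = ∂y/∂t + u·∇y can be illustrated with a toy one-dimensional field of my own choosing: for a profile simply carried along by a uniform flow, y(x, t) = g(x - u t), the material derivative vanishes even though the local time derivative and the advective term are each nonzero.

```python
import math

u = 2.0                              # uniform flow speed (arbitrary)
g = math.sin                         # advected profile (arbitrary)

def y(x, t):
    # A field frozen into the flow: it only translates at speed u.
    return g(x - u * t)

def material_derivative(x, t, h=1e-5):
    # Dy/Dt = dy/dt + u * dy/dx, both via central differences.
    dydt = (y(x, t + h) - y(x, t - h)) / (2 * h)
    dydx = (y(x + h, t) - y(x - h, t)) / (2 * h)
    return dydt + u * dydx

print(abs(material_derivative(0.3, 1.7)))                       # ~0
print(abs((y(0.3, 1.7 + 1e-5) - y(0.3, 1.7 - 1e-5)) / 2e-5))    # dy/dt alone is not 0
```

The two individually nonzero terms cancel, mirroring how individually nonzero terms in a momentum balance can sum to zero net force.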



Correct definition of body force for unsteady excitation of the EM Drive

Similarly, for a cavity with no electric charges and no electric currents inside the cavity, the proper force definition that would move the center of mass of the EM Drive would be:

f = ∇·σ - (1/c²) ∂S/∂t

where f is a force per unit volume acting on the center of mass.

The force defined by Trunev and by Yang as just the derivative with respect to time of the Poynting vector is non-zero for a resonant cavity for the same reason that the inertial force and the spring force are non-zero in a harmonic oscillator: energy goes from the electric field (which changes harmonically with time) to the magnetic field (which also changes harmonically with time, but out of phase) and vice versa. In a harmonic oscillator, kinetic energy goes into potential energy and vice versa (with no movement of the center of mass). For steady-state oscillations, as shown by Greg Egan, the solution of Maxwell's equations shows that the cyclic time average of the derivative with respect to time of the Poynting vector and the cyclic time average of the divergence of the stress tensor are both zero.

However, for the transient problem of the EM Drive (not discussed by Greg Egan in his article), both the Poynting vector field and the stress tensor field exhibit oscillations around a growth that saturates exponentially with time.  Thus, under transient radio-frequency excitation of a resonant cavity, the cyclic time average of the time derivative of the Poynting vector field will not be zero.  This does not mean that there is a net force on the center of mass that can be explained by Maxwell's equations (or by Yang-Mills equations, which also satisfy conservation of momentum).  Rather, this force due to the time derivative of the Poynting vector changing exponentially with time is perfectly balanced by the force due to the divergence of the stress tensor, and vice versa. 
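The steady-versus-transient distinction can be shown with a toy scalar stand-in (my own illustration, not a cavity simulation): the cycle average of dU/dt is exactly zero for a steady oscillation but positive while an exponential envelope is still filling, with assumed time constants chosen only for clarity.

```python
import math

omega = 2 * math.pi * 1.0            # one cycle per unit time (arbitrary)
tau = 5.0                            # assumed fill time constant (arbitrary)

def U(t, steady):
    # Toy stored-energy-like quantity: steady oscillation, or the same
    # oscillation under a (1 - exp(-t/tau)) fill-up envelope.
    env = 1.0 if steady else (1.0 - math.exp(-t / tau))
    return (env * math.sin(omega * t))**2

def cycle_avg_dUdt(t0, steady):
    # The average of dU/dt over one period T equals (U(t0+T) - U(t0)) / T.
    T = 2 * math.pi / omega
    return (U(t0 + T, steady) - U(t0, steady)) / T

print(cycle_avg_dUdt(1.25, steady=True))    # 0: steady state
print(cycle_avg_dUdt(1.25, steady=False))   # > 0: transient buildup
```

A nonzero cycle average during buildup reflects energy being stored, not a net force: in the full momentum balance that term is cancelled by its partner, exactly as argued above.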

Hence, to correctly calculate the total force on the EM Drive's center of mass, one has to calculate all the forces involved:

f = ∇·σ - (1/c²) ∂S/∂t


If one also considers electric charges and currents, one has to take those forces into account (ρE and J×B) when calculating the total force on the center of mass.  The total force on the center of mass is composed of the time derivative of the Poynting vector, the divergence of the stress tensor, and the Lorentz force.  Just as in a harmonic oscillator, where one has inertial forces and spring forces and the center of mass will only move due to an external net force, for the EM Drive one has to take into account all the forces; thus it is incorrect for Trunev and Yang to define the force acting on the center of mass as being due to only one of these terms.  Also, just as one has to consider a damping force in the harmonic oscillator (if damping is present), for the EM Drive one has to consider power-loss terms that lead to a finite quality of resonance Q (due to finite conductivity in the metal walls, for example).

For more information on how to correctly calculate forces for electromagnetic continua, see the classic text by Professor Melcher at MIT (sorry he is no longer with us):

Melcher, James R., Continuum Electromechanics. Cambridge, MA: MIT Press, 1981.  ISBN: 9780262131650

and particularly, the excellent monograph:

Paul Penfield Jr., H.A. Haus, Electrodynamics of Moving Media. 1967, Cambridge,Mass.: MIT Press. ISBN-13: 978-0262160193
« Last Edit: 06/10/2016 12:26 AM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
@FattyLumpkin
This is the best Cannae-like design I could get out of FEKO LITE so far. I would need a full version to come closer to
Monomorphic's sims. 
https://forum.nasaspaceflight.com/index.php?topic=39772.msg1530441#msg1530441
« Last Edit: 06/13/2016 08:41 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
@FattyLumpkin
This is the best Cannae like design I could get out of FEKO LITE till now. Would need a full version to come close to 
Monomorphic´s sims.  :-\
https://forum.nasaspaceflight.com/index.php?topic=39772.msg1530441#msg1530441
Great work!

1) Have you also tried to model the Cannae device with EM Pro, for which you do not have the limitations of FEKO Light?

2) When you used EM Pro in the previous pages for other calculations, did you use the Finite Element module of EM Pro or the finite difference module of EM Pro?

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...
Great work!

1) Have you also tried to model the Cannae device with EM Pro, for which you do not have the limitations of FEKO Light?

2) When you used EM Pro in the previous pages for other calculations, did you use the Finite Element module of EM Pro or the finite difference module of EM Pro?
1) I did not test this using EMPro.
This FEKO run was the first try at this kind of design.

2) The few sims I showed here were based on the FEM eigenresonance solver.
For the frustum design I also did full FEM calculations (amplitude & phase over frequency) with the antenna and antenna feed, but for the argument the eigenresonances are close enough compared with the much more time-consuming frequency-sweep FEM runs.
« Last Edit: 06/14/2016 04:57 AM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Further to this issue about Boundary Conditions, Jackson has an excellent discussion in his masterpiece Classical Electrodynamics, 3rd Edition, pages 352 to 356 (ISBN-10: 047130932X; ISBN-13: 978-0471309321).  Pay special attention to the graph on page 355, Fig. 8.2, "Fields near the surface of a good, but not perfect, conductor".  For a good but not perfect conductor, for example the copper used in the EM Drive, and as modeled in Meep with the Drude equation model, a very tiny electric field parallel to the surface will be present, as well as a very tiny magnetic field perpendicular to the surface.  The electric field parallel to the surface is inversely proportional to the square root of the conductivity:

E∥ ≈ √(μc ω/(2σ)) (1 - i) (n × H∥) e^(-ξ/δ) e^(iξ/δ)

This solution exhibits the expected rapid exponential decay (skin depth δ) and π/4 phase difference.  For a good conductor, the fields inside the metal conductor are parallel to the surface and propagate normal to it, with magnitudes that depend only on the tangential magnetic field H∥ that exists just outside the surface of the metal conductor.

As shown in the graphs in Jackson's book, and as one can readily calculate these fields are practically zero, insignificant for copper, due to its very high conductivity.  Therefore it is not a surprise that Meep does not show a significant difference in the fields calculated using the perfect conductor model vs. the Drude model.  The main influence of the finite conductivity Drude model is to allow the calculation of a finite Q (instead of an infinite Q).  But again, the boundary condition is such that the electric field parallel to the surface and the magnetic field perpendicular to the surface must be practically zero at the surface, due to the very high conductivity of copper.  This is particularly so for the experiments of EM Drive where experimenters seek a high Q, which is tantamount to practically zero fields for these variables.  It is completely inconsistent for EM Drive experimenters to advocate a high Q (Shawyer even claiming to research superconducting EM Drives) and not realize that these boundary conditions are such that these fields must be practically zero at the surface of the good conductor.

NOTE: for those not having ready access to Jackson's monograph, the following discussion by a professor at Duke University is also good:  https://www.phy.duke.edu/~rgb/Class/phy319/phy319/node59.html

Further to this issue about Boundary Conditions, Wolfgang Panofsky and Melba Phillips give a very useful rule of thumb in their book:

Classical Electricity and Magnetism
Wolfgang Panofsky and Melba Phillips
Second Edition
Addison-Wesley , 1962

in page 214, section 13-1:

the ratio of the tangential component of E to its normal component
and
the ratio of the normal component of H to its tangential component

at an interface between a dielectric and a conductive metal are both given by the ratio of the skin depth to the wavelength.

For the EM Drive problem with a copper cavity at ~2 GHz, we know that the skin depth is on the order of a micrometer while the wavelength is about 0.15 m, so this ratio is on the order of 10⁻⁵. Therefore, when seen in a graph, the magnetic vector field H should appear tangential to the metallic surface and the electric vector field E should appear normal to the metallic surface, since the suppressed components of these fields (tangential E, normal H) are roughly a hundred thousand times smaller at the metal interface.

If a numerical calculation shows otherwise, this indicates that something is very wrong with the calculation ("Garbage In = Garbage Out") and the source of the error should be tracked down.
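The order-of-magnitude argument above can be evaluated directly for copper at 2 GHz (standard handbook values assumed for the conductivity and constants): the skin depth is micron-scale, the surface impedance is tiny compared with free space, and the Panofsky-Phillips ratio of skin depth to wavelength indeed comes out near 10⁻⁵.

```python
import math

c = 2.998e8                     # speed of light (m/s)
mu0 = 4 * math.pi * 1e-7        # vacuum permeability (H/m)
sigma = 5.8e7                   # conductivity of copper (S/m), handbook value
f = 2.0e9                       # cavity drive frequency (Hz)
omega = 2 * math.pi * f

delta = math.sqrt(2 / (mu0 * sigma * omega))   # skin depth (m)
Rs = math.sqrt(mu0 * omega / (2 * sigma))      # surface resistance (ohm)
Z0 = 376.73                                    # impedance of free space (ohm)
wavelength = c / f                             # free-space wavelength (m)

print(delta)                    # ~1.5e-6 m: micron-scale skin depth
print(Rs / Z0)                  # ~3e-5: tangential E is vanishingly small
print(delta / wavelength)       # ~1e-5: the rule-of-thumb ratio
```

Any field plot in which E looks tangential or H looks normal at the copper wall is therefore off by roughly five orders of magnitude from the physics.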
« Last Edit: 06/15/2016 06:29 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Q difference for spherical and flat endplates

As an example for the spherical case I will use the numbers of Greg Egan that can be found on the following website:
 http://gregegan.customer.netspace.net.au/SCIENCE/Cavity/Cavity.html

Greg Egan, spherical end plates (these values assume an idealised smooth copper surface):
r1=25mm, r2=100mm, θ=20°
TE012, f=9.359GHz, Q=37,864

To get an idea of the difference we can make a simple comparison as follows, by using a similar cavity with flat endplates (the diameters and the length of the sidewall are the same as in the spherical case)
with recalculated dimensions in cylindrical coordinates:
SD=17.101mm, BD=68.404mm, L=70.477mm
A recalculation of θ with the rounded numbers gives a cone half angle of 19.99998°, which is close enough for this approximation...
Using the formula for frequency and Q  as discussed earlier in this thread:
https://forum.nasaspaceflight.com/index.php?topic=39214.msg1476704#msg1476704
https://forum.nasaspaceflight.com/index.php?topic=39214.msg1473268#msg1473268
 we get:
TE012, f=10.953GHz, Q=30,963

The ratio of Q (spherical/flat) in this case is roughly 1.223.
This number may be different for other modes and cavity dimensions because it depends on the exact geometry, frequency, mode, surface resistance, volume to surface ratio and so on.
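These numbers can be double-checked in a few lines. The following Python sketch (using the rounded flat-endplate dimensions quoted above, with SD and BD read as end-plate diameters) recovers the cone half-angle and the Q ratio:

```python
import math

# Flat-endplate dimensions quoted above: small/big end DIAMETERS and axial length, in mm
SD, BD, L = 17.101, 68.404, 70.477

# Cone half-angle recovered from the cylindrical-coordinate dimensions
theta = math.degrees(math.atan(((BD - SD) / 2) / L))
print(f"half angle = {theta:.5f} deg")   # ~20 deg, matching Egan's spherical case

# Ratio of the two quality factors (spherical vs flat endplates)
Q_spherical, Q_flat = 37_864, 30_963
print(f"Q ratio = {Q_spherical / Q_flat:.3f}")   # ~1.223
```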
« Last Edit: 08/24/2016 09:26 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
This is re-posted here, to enable more direct finding of these calculations to use in the future for other orbits and spacecraft

I hope there is no confusion re the various projects going on at Cannae...as folks know I've been conducting a study of them
1) for the cubesat, they say they are orbiting it at less than 150 miles...not kilometers. Mr. Feta told me there would be no super-cooling of this device. I'm left to conclude that the cooling of their thruster will be passive and that the thruster will be kept in shadow at all times
...
So they are using miles instead of kilometers to specify an orbit?



150*1.60934=241

So their orbit is really 241 km ?

That makes a difference ! Thanks

I calculated with 150 km.  (I recall pointing this out weeks ago, when they first announced, that instead of using customary SI units, Cannae is using Miles, oh well  ;) )

Just like the probe that crashed on Mars years ago (different units !)

Will recalculate with 150 miles tomorrow (Is it US Miles ??? )


US Survey mile = International mile =1.60934 km
Nautical mile =  1.852 km
Roman mile = 1.481 km
Chinese mile = 0.5 km

Let's work through the numbers for Cannae's proposed Cubesat mission, using 240 km instead of 150 km:

http://cannae.com/cubesat/
http://cannae.com/cannae-is-developing-a-cubesat-thruster/
http://www.popularmechanics.com/science/energy/a22678/em-drive-cannae-cubesat-reactionless/

The publicity picture appears to show a larger than 1x3U Cubesat, the second link talks about a 6U Cubesat


Orbit (assume circular orbit at published distance, interpreted as US Miles)
ro=150 USmile *1.60934 km/USmile ~ 240 km

Orbital velocity (Assuming circular orbit at 240 km)

G=6.67408 * 10^-11 m^3 kg^-1 s^-2
M=5.972 * 10^24 kg (mass of the Earth)
R=6.371*10^6 m (mean radius of the Earth)
r = R + ro
  = 6.371*10^6 m + 240*10^3 m
  = 6.611*10^6 m
v = Sqrt[G M / r]
  = 7765 m/sec

Drag Surface area: assume a minimum cross-sectional area, for a 1x3U Cubesat with a cross-sectional drag surface of 0.10 m x 0.30 m perpendicular to the orbital velocity vector (this assumes that the solar panels are always parallel to the orbital velocity vector)
Assume minimum configuration:
1x3U Cubesat (Notice that picture shows a larger Cubesat and link discusses a 6U Cubesat.  The thrust necessary for larger Cubesats can be obtained by simple scaling of the appropriate cross-sectional area.  For example, a 2x3U Cubesat will have twice the minimum cross-sectional area of a 1x3U Cubesat)
A=0.10m *0.30m
  = 0.03 m^2

Drag coefficient
CD=2  (*)



Reynerson, "Aerodynamic Disturbance Force and Torque Estimation for Spacecraft and Simple Shapes Using Finite Plate Elements  Part I: Drag Coefficient"

https://www.researchgate.net/publication/221910818_Aerodynamic_Disturbance_Force_and_Torque_Estimation_for_Spacecraft_and_Simple_Shapes_Using_Finite_Plate_Elements_Part_I_Drag_Coefficient/figures?lo=1

de Vries, "Cubesat Drag Calculations", https://e-reports-ext.llnl.gov/pdf/433600.pdf
Oltrogge et al., "An Evaluation of CubeSat Orbital Decay",
http://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=1144&context=smallsat

Atmospheric Density
MSISE-90 std atmosphere (for 240 km)
(References: 
This link enables the computation and plotting of any subset of MSIS parameters:
http://omniweb.gsfc.nasa.gov/vitmo/msis_vitmo.html
http://ccmc.gsfc.nasa.gov/models/modelinfo.php?model=MSISE ) (*)
@Mean solar activity rhoMean= 9.91 x 10^-11 kg m^-3
@Maximum solar activity rhoMax = 4.08 x 10^-10 kg m^-3
rhoMax/rhoMean=4.117

The solar cycle is very important, since air density, and hence drag, is very much dependent on solar activity.  It is what brought Skylab down ( https://en.wikipedia.org/wiki/Skylab#Solar_activity ):
Quote
British mathematician Desmond King-Hele of the Royal Aircraft Establishment predicted in 1973 that Skylab would de-orbit and crash to earth in 1979, sooner than NASA's forecast, because of increased solar activity. Greater-than-expected solar activity heated the outer layers of Earth's atmosphere and increased drag on Skylab.



Observe the chart below for where we are now, and the predicted activity in the future:



Drag Force
DMax =(1/2) CD rhoMax A v^2
         =(1/2) (2) (4.08 x 10^-10 kg m^-3) (0.03 m^2) (7765 m/s)^2
         =7.38*10^(-4) N
DMean = DMax/4.117
          =1.79*10^(-4) N

Solar radiation pressure is negligible: 4.5 (absorption) to 9 (reflection) μN/m^2, so the radiation force will be less than 0.27 μN. (For LEO, the radiation pressure from the Earth is hard to model as it depends on cloud albedo, but it is smaller than solar and thus also negligible.)

Mass = 1.33 kg/U
             3U =4 kg
             6U =8 kg

Acceleration due to Atmospheric Drag

For  3U, mass=4 kg
aMax= DMax/Mass
        =7.38*10^(-4) N/4 kg
        =1.85*10^(-4) m/s^2
aMean=DMean/Mass
         =1.79*10^(-4) N /4 kg
         =4.48*10^(-5) m/s^2

Maximum Power available from sunlight = 10 watts

From Cannae's announcement: http://cannae.com/cubesat/

Quote
Our thruster configuration for the cubesat mission with Theseus is anticipated to require less than 1.5 U volume and will use less than 10 watts of power to perform station keeping thrusting.
Effective power available, assuming a common low-to-moderate-inclination circular orbit at 240 km altitude, as shown in this picture by Cannae:




and hence taking into account that the solar panels will be experiencing eclipse ~50% of the time, and that the solar panels must be kept parallel to the orbital velocity vector at all times:
P=(1/2) 10 watts (*)
 =5 watts

Assume no safety margin: SafetyMargin=1

Necessary thrust
TMax= SafetyMargin* DMax
        = 7.38*10^(-4) N
TMean= TMax /(rhoMax/rhoMean)
       = 1.79*10^(-4) N

Necessary Thrust/PowerInput

TMax /PowerInput= 7.38*10^(-4) /5 W
                          = 148 μN/W
TMean /PowerInput= 1.79*10^(-4) N /5 W
                          = 36 μN/W
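The chain of numbers above can be reproduced in a short Python sketch (same assumptions as in the post: CD = 2, minimum 1x3U cross-section, MSISE-90 densities, 5 W effective power):

```python
import math

# Circular orbit at 240 km (150 US miles), values as quoted in the post
G  = 6.67408e-11        # gravitational constant, m^3 kg^-1 s^-2
M  = 5.972e24           # mass of the Earth, kg
R  = 6.371e6            # mean radius of the Earth, m
ro = 240e3              # orbit altitude, m

v = math.sqrt(G * M / (R + ro))          # circular orbital velocity, m/s

CD = 2.0                                 # assumed drag coefficient
A  = 0.10 * 0.30                         # minimum 1x3U cross-section, m^2
rho_max  = 4.08e-10                      # MSISE-90 density @ maximum solar activity, kg/m^3
rho_mean = 9.91e-11                      # @ mean solar activity

D_max  = 0.5 * CD * rho_max  * A * v**2  # drag force, N
D_mean = 0.5 * CD * rho_mean * A * v**2
P = 5.0                                  # effective power, W (10 W with ~50% eclipse)

print(f"v        = {v:.0f} m/s")                 # ~7765
print(f"D_max    = {D_max:.2e} N")               # ~7.4e-4
print(f"T/P max  = {D_max / P * 1e6:.0f} uN/W")  # ~148
print(f"T/P mean = {D_mean / P * 1e6:.0f} uN/W") # ~36
```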





Conclusion:

The orbit makes a big difference concerning the requirements for such a mission.  While a 150 km orbit would require ~1 milliNewton/Watt to ensure no deorbiting, an orbit at 240 km requires substantially less thrust/PowerInput.   Note that most Cubesat launches are at 300 - 400 km; the ISS maintains an orbit at an altitude of between 330 and 435 km by means of reboost manoeuvres using the engines of the Zvezda module or visiting spacecraft.


Cannae's mission for keeping in orbit for 6 months a Cubesat, assuming:

*  minimum configuration 1x3U Cubesat, with cross sectional area of only 0.03 m2
*  no safety margin
*  mean Solar activity
*  that the solar panels are kept always parallel to the orbital velocity vector (otherwise drag will be much greater)

requires a Thrust/PowerInput= 36 microNewton/Watt which is consistent with NASA's previously reported results for copper resonant cavities excited at ~2 GHz :

http://www.libertariannews.org/wp-content/uploads/2014/07/AnomalousThrustProductionFromanRFTestDevice-BradyEtAl.pdf 

http://emdrive.wiki/Experimental_Results

However, maximum Solar activity would require about 150 microNewton/watt. 

If the solar panels are not kept parallel to the orbital velocity vector at all times, drag will be much greater, and hence much greater thrust would be required.

Furthermore, this assumes no safety margin.

Also this is based on a minimum configuration 1x3U Cubesat, with minimum cross sectional area of only 0.03 m2

The Cannae publicity picture appears to show a larger than 1x3U Cubesat configuration instead, and if so, a larger cross-sectional area which would require a proportionally larger thrust force to overcome atmospheric drag.

The link http://cannae.com/cannae-is-developing-a-cubesat-thruster/ describes a 6U Cubesat.  If it is a 2x3U Cubesat, then the minimum cross-sectional area is twice what is calculated above and therefore the atmospheric drag will require twice the thrust calculated above for a 1x3U Cubesat.



Also, worthy of note when planning a 6 month mission in Low Earth Orbit:

I know I sound like a broken record, but I would really like to know how they plan to separate out thrust effects from the high variability of atmospheric density at those altitudes.

_________________________________________
(*) Thanks to Marshall Eubanks for providing these estimates. I am responsible for any errors in using them.
« Last Edit: 09/07/2016 06:57 PM by Rodal »

Offline Willem Staal

  • Member
  • Posts: 25
  • Netherlands
  • Liked: 6
  • Likes Given: 2
mr Rodal,

I am hugely interested in the theory of these EM Drives and i wonder:

Could the overall effect be a result of a phase transition of a standing wave? The Dutch scientist Christiaan Huygens observed synchronization of pendulum clocks, and discovered that at times they run in phase or in anti-phase due to vibrations through walls or through a table.

Could it be that a similar behaviour occurs in an EM Drive frustum, where the amplitude of the waves is truncated by the shape of the frustum while the waves are forced into phase synchronization? So that the energy of the amplitude escapes somewhere through the copper of the frustum and then produces thrust: not as a detectable force, but as a phase or anti-phase synchronization event?

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Atmospheric drag on the sat is measurable, but it cannot be ameliorated by always keeping the sat solar array parallel to the orbital velocity vector

Incorrect statement.  On the contrary, it is well known that atmospheric drag is minimized by minimizing the cross-sectional area perpendicular to the drag force. 


Thus, the minimum-drag orientation is indeed flying with the solar array parallel to the orbital velocity vector. This is known in aerospace engineering as flying "edge on", or flying in the low-drag feathered position. Flying edge on significantly reduces drag (as is easy to verify by calculation, since the drag is proportional to the cross-sectional area perpendicular to the drag force).

If flying a spacecraft with solar arrays on gimbals, in the edge-on position the alpha gimbal is fixed.  If desired, the alpha gimbal could be removed to lower weight.

This is what was assumed in my calculations: edge-on flying, with no gimbal, the solar array being fixed to the Cubesat, as in Cannae's picture.

The satellite must rotate about its own axis about once every 90 minutes to keep the array pointed at the sun.  This rotation will continue in Earth's shadow....
No.  There is no law that imposes such a solar-array orientation as the only option.

When designing a solar array orientation for a Low Earth Orbit (LEO) there are several options for orientation of the solar array:

1) Sun pointing

The spacecraft may maintain (if so desired) a fixed orientation with respect to Earth, and a gimbal (alpha gimbal) can be used to track the Sun as the spacecraft rotates in orbit.  A beta gimbal (rotation around the longitudinal axis of the solar arrays) can compensate for variations in the angle of the Sun to the orbital plane.

This is not the only alternative. 

You propose a more extreme version of the sun-pointing configuration where the solar-array is fixed to the Cubesat and hence the whole Cubesat has to rotate continuously in order to keep Sun-pointing all the time.


This is a flying configuration that produces a much greater drag force.
In addition, since you are using no gimbals, you have to rotate the whole spacecraft to accomplish your proposed Sun-pointing at all times.  Thus, you propose, as the only choice available, a flying configuration that produces greater drag, and that in addition requires rotating the spacecraft.

 If one calculates this, one arrives at the conclusion that flying in the sun-pointing configuration at all times will require a thrust that exceeds the published claims for a copper EM Drive (the kind of EM Drive that Cannae reports will fly in this mission). (*)

2) Hybrid. For example, Sun pointing during the illuminated portion of the orbit, and edge on during eclipse (to minimize drag).  During eclipse the solar array can be gimbaled edge on to the orbital velocity vector, which requires a rotation of ~70 to 75 degrees twice per orbital period.

For example, the ISS adopts a hybrid solar array orientation: it points the solar array at the Sun (and takes the drag penalty) when in light, goes into a perpendicular mode in the dark. The ISS "furls" its solar panels when in darkness.

3) Edge on during the entire orbit.  This is the option that was assumed in my calculations to minimize drag, since EM Drives (assuming they would work somehow) are very limited in the available thrust/PowerInput.  This was made very clear in the calculations.

This is what was assumed: edge-on flying during the entire orbit, with no gimbal, the solar array being fixed to the Cubesat, as in Cannae's picture:



Flying with the solar-arrays "edge-on" means that the amount of power available from the solar arrays will be decreased.  This reduction was explicitly taken into account in my analysis !

Flying edge-on during the entire orbit, besides minimizing drag, has the advantage that it keeps the spacecraft facing the Earth at all times, which may be beneficial for missions to monitor the Earth.

This is not an option that is impossible, or that I invented "out of thin air".  It is a well-known configuration option.
See articles by G. Landis and C. Lu, (AIAA) and by Anigstein and Sanchez Pena (IEEE) on analysis of solar panel orientation in low altitude satellites.

You state that flying with the solar arrays Sun-pointing all the time is the only option.  This is not so.  It is simple to run the numbers and show that the option you appear to consider as the only possible one (Sun-pointing) would require significantly greater thrust, which, according to published claims for a copper EM Drive (if it were to work as claimed), it would not be able to overcome.

____
(*) As a minor detail, being picky, the sketch and the absolutist demand for such a complete rotation once per orbit need further consideration.  As we all know, the Earth rotates around the Sun once per _year_ (the Earth's orbit).  Hence, the Sun rotates once per _year_ in an inertial reference frame tied to the Earth's orbit, not once per spacecraft _orbit_.

The ISS itself rotates once per orbit to keep one side always looking at the Earth (it has an "up" and a "down" side - the cupola, for example, is on the down side and always points at the Earth) and that means that its solar panels _counterrotate_ when the Sun is up.

Hence the Sun-pointing option, besides involving greater drag, involves a level of complexity that is undesirable for a smallsat mission like the one proposed by Cannae. 

By contrast, flying "edge on", in the low-drag feathered position, is much simpler, involving minimum drag and minimum complexity.
« Last Edit: 09/08/2016 08:54 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Low order TE0np modes in a "flying saucer" like cavity.

https://www.scientificsonline.com/product/large-parabolic-reflectors-12?gclid=Cj0KEQjwhvbABRDOp4rahNjh-tMBEiQA0QgTGge0SE4e6mkfaHf2PHMvrl0O0WC9sYiQ8hwtbo306-0aAgiR8P8HAQ

This design tends to change the mode integer "n" rather than "p" (or "m") over frequency.

« Last Edit: 11/06/2016 04:59 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Added for preservation: copy of the original post from the main EM Drive thread
Is there anyone who has studied a half-sphere shaped resonator in the context of the EM Drive?
In contrast to a parabolic one (where the focal depth, for rays much shorter than the size of the structure itself, was equal to the point where the baseplate was present).
http://forum.nasaspaceflight.com/index.php?topic=39214.msg1607020#msg1607020
 

Now I did an FEA with the half-sphere shape. What I found is a massive field strength, much higher than I have ever observed in the sims before. The Q should be very high.

1) What is the numerical analysis package you are using ? (FEKO, etc.)

2) What numerical technique are you using to solve the equations? (Finite Element Method?, Boundary Element Method?, Finite Difference Method Space Domain?)?

3) What is the type of solution method?

A) Is it an eigensolution to the eigenvalue problem where there is no antenna in the model?

B) Or a steady state solution using an antenna and a spectral method to obtain a solution?
 
C) Or a transient solution using an antenna and a Finite Difference Time Domain to obtain a solution?

D) If you used an antenna, with a spectral steady-state solution or a transient Finite-Difference-Time-Domain solution, what was the type of antenna and where was it located?

4) What are the boundary conditions that you use in the model? Are you assuming a perfect conductor?
If not, how are you modeling an imperfect conductor like copper?

5) How is the quality factor (Q) calculated?

6) How are eddy currents calculated in the model?

Thanks

1. FEKO
2. MOM & FEM
3. ?
A. No, no eigenvalue calculation; a magnetic dipole (30 mm above the flat plate, on the central axis)
B. FEM
C. No FDTD

4. The first time, the boundary was defined to be PEC. I couldn't believe these numbers, therefore I used copper, thickness 1 mm, for the second run (see diagrams).
Field pics are from the PEC run.

5. No, the Q has not been calculated so far. My statement was based on the field strength.  ::)


6. Good question. It's an internal calculation of FEKO; I don't know their code ;)
« Last Edit: 11/08/2016 06:03 PM by X_RaY »

Offline Peter Lauwer

  • Full Member
  • **
  • Posts: 202
  • Setting up an exp with torsion balance
  • Netherlands
  • Liked: 183
  • Likes Given: 329
The White et al. 2016 paper (the leaked, non-peer reviewed version, which still can be downloaded from http://www.nextbigfuture.com/2016/11/new-nasa-emdrive-paper-shows-force-of.html):

I haven't read a lot of discussion of the paper yet. Some remarks can be made, though, and questions asked. It looks like a solid piece of work: greatly admirable engineering and a clear discussion. We don't know when this version dates from (the pdf I downloaded doesn't give a creation date, only 26 Aug 2016 as a modification date). It doesn't look like two years of work to me (but they probably could not work on it full-time). I am a bit disappointed that they don't show results for other dielectric inserts (which they probably tested).

A few issues and questions:
- I wonder whether switching direction in their way, with the whole RF stuff (amplifier etc) attached to the large endplate, is the best to do. As they write, they had retuning problems when using a 'split configuration mode'. But if you use a flexible cable, it should be possible to turn only the cavity by 180 degrees and leave the RF stuff at the same position and orientation.
- Do I see a saturation effect around 60 W? See Figs. 13, 15 and 19. It does not seem to be so much work to perform, say, 100 measurements. Then they could have shown with statistics that there is a difference in force between the 60 W and the 80 W input, or not. Now that is not clear (their premise is probably that there SHOULD be a linear dependence on power: dangerous).
- I am still a bit worried about the liquid metal contacts they use to supply the DC power to the torsion balance (many amps!). It is not likely that these will give rise to the signals they observe, but I haven't seen a test of their influence on the measurement.

Maybe more later,
Peter.
Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.   — Richard Feynman

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
« Last Edit: 11/13/2016 04:59 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Due to the request of Jean-Philippe Montillet and Jose' Rodal, I did simulations using FEKO (student edition) to plot the fields along the surfaces of the Brady cone without dielectric insert for the TM010 mode.
I found the TM010 at 1020 MHz using an electrical dipole as source. Frustum material is copper.
Source power was defined to be 1 Watt (30 dBm).
For the final run I increased the convergence accuracy to the maximum level and set the mesh density to "Fine" (best when using automatic frequency-dependent meshing).
Please note that the diagram units are in dB; this is because one of the E and H curves is almost at the zero level when displayed linearly.


EDIT
v2.png shows the same model with an increased mesh and more points along the lines where the measurements are taken
« Last Edit: 01/22/2017 06:39 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Results for Brady cone with HDPE-disc at the small end plate. Source power was defined to be 1Watt (30dBm).
εR=2.27
tanδ=0.00031
DIA*=158.75mm
Height=54mm

* to simplify the model I used a diameter equal to the end plate diameter instead of the 156.7mm reported by EW
« Last Edit: 11/14/2016 04:35 PM by X_RaY »

Offline Peter Lauwer

  • Full Member
  • **
  • Posts: 202
  • Setting up an exp with torsion balance
  • Netherlands
  • Liked: 183
  • Likes Given: 329
"According to Woodward, who saw a copy of the paper shortly after it had been accepted for peer review, the main difference between the accepted copy and the leaked early release is that the latter has way more theory trying to explain the results. Supposedly the AIAA would only accept the paper if White and his colleagues ditched the quantum vacuum theory and just published the results of their research without trying to explain it."

http://motherboard.vice.com/read/the-fact-and-fiction-of-the-nasa-emdrive-paper-leak
Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.   — Richard Feynman

Offline Peter Lauwer

  • Full Member
  • **
  • Posts: 202
  • Setting up an exp with torsion balance
  • Netherlands
  • Liked: 183
  • Likes Given: 329
The White et al. 2016 paper (the leaked, non-peer reviewed version, which still can be downloaded from http://www.nextbigfuture.com/2016/11/new-nasa-emdrive-paper-shows-force-of.html):

...

What is also missing in the paper is quantitative data about their measurement device (the torsion pendulum): especially what its response is (you can estimate it a bit from the response to the electric calibration pulse, but more detail would be justified in a study like this) and its resolution. And something about its drift.
Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.   — Richard Feynman

Offline WarpTech

  • Full Member
  • ****
  • Posts: 1113
  • Do it!
  • Vista, CA
  • Liked: 1226
  • Likes Given: 1593
Warp Tech's Updated Theory

Note: The thrust equation at the end depends on the Neper frequency. This is [resistance x capacitance]^-1, i.e. 1/(RC).

If we assume this time period to be proportional to the diameter of the frustum, then the time-independent part of this equation reduces to @Notsosureofit's thrust formula, where the gradient derivative is expressed as the difference between two potentials.

« Last Edit: 11/16/2016 04:15 AM by WarpTech »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
 Jean-Philippe,
this is the result for the alternative situation you were looking for. Source power was defined to be 1 Watt (30 dBm), as before.

Regards
« Last Edit: 11/20/2016 05:02 PM by X_RaY »

Offline Peter Lauwer

  • Full Member
  • **
  • Posts: 202
  • Setting up an exp with torsion balance
  • Netherlands
  • Liked: 183
  • Likes Given: 329
"According to Woodward, who saw a copy of the paper shortly after it had been accepted for peer review, the main difference between the accepted copy and the leaked early release is that the latter has way more theory trying to explain the results. Supposedly the AIAA would only accept the paper if White and his colleagues ditched the quantum vacuum theory and just published the results of their research without trying to explain it."

http://motherboard.vice.com/read/the-fact-and-fiction-of-the-nasa-emdrive-paper-leak

So this is not true. Their pilot wave theory is still in the Discussion.
Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.   — Richard Feynman

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
...

So this is not true. Their pilot wave theory is still in the Discussion.

This theory is very interesting, especially since it's compatible with known physics: it is nothing other than a different physical interpretation of the equations.
Here is one of my favorite videos about it:


Very impressive is how the spatial probability of detection of the droplet matches that of a comparable quantum system, as shown in the video.

« Last Edit: 11/19/2016 02:43 PM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Quote from: X_RaY
Very impressive is how the spatial probability of detection of the droplet matches that of a comparable quantum system; see the video at 1:40.

Would that also find application in quantum cypher decoding?
Good question.
There may exist hidden variables which are summarized in the known viewpoint (the Copenhagen interpretation). For example, there could be a process similar to the one in the video, at much shorter time scales than we can yet measure, so that we see the result only as a wave function with a defined probability of detection, not as a particle with sharply defined position and velocity at the same time (uncertainty principle).

On the other hand, effects like quantum entanglement seem impossible over large distances in this picture, too. ::)
« Last Edit: 11/19/2016 12:18 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Results for Brady cone with HDPE-disc at the small end plate. Source power was defined to be 1Watt (30dBm).
εR=2.27
tanδ=0.00031
DIA*=158.75mm
Height=54mm

* to simplify the model I used a diameter equal to the end plate diameter instead of the 156.7mm reported by EW
X_Ray, we have to be careful with calling this mode shape "TM010".  That name is correct for an empty cylinder (with no dielectric insert partially filling the cavity), which can have a constant field in the longitudinal direction.  However, we know from the exact analytical solution that the fields cannot be constant in the longitudinal direction for a cone (even for an empty cone), because a constant electromagnetic field in the longitudinal direction cannot satisfy the boundary conditions for a cone (as verified by your FEKO boundary element analysis results).  Since the electromagnetic fields are not constant in the longitudinal direction, p is not equal to zero.  So this is a degenerate form of TM010; perhaps we should call it TM01?.  This mode shape becomes TM010 as one varies the cone angle to zero, such that the cone becomes a cylinder.  As the cone becomes a cylinder, the electromagnetic fields become constant in the longitudinal direction.

In particular, it is interesting how the electromagnetic fields, and especially the E field, have a gentle gradient at each end, in order to accommodate the boundary conditions at each end and the fact that the field cannot be constant in the longitudinal direction.
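For contrast, in an ideal empty cylindrical cavity the TM010 resonance really is independent of the cavity length, precisely because the field is constant along the axis. A minimal Python sketch (the two radii below are hypothetical, chosen only to bracket the ~1 GHz regime of the Brady cone):

```python
import math

# For an empty cylindrical cavity, the TM010 resonance depends only on the
# radius a, not on the length L, because the field is constant along the axis:
#   f_TM010 = c * j01 / (2*pi*a),  with j01 the first zero of Bessel J0
c   = 299792458.0            # speed of light, m/s
j01 = 2.404825557695773      # first zero of the Bessel function J0

def f_tm010(radius_m):
    """TM010 resonant frequency of an ideal empty cylindrical cavity."""
    return c * j01 / (2 * math.pi * radius_m)

# Hypothetical radii for illustration (roughly the Brady cone's size class)
for a in (0.10, 0.12):
    print(f"a = {a} m -> f = {f_tm010(a)/1e9:.3f} GHz")
```

The absence of any length dependence is exactly what is lost in the cone: once the walls taper, the field must vary along the axis and the p = 0 label no longer applies.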

NO dielectric



~~~~~~~~~~~~~~

SMALL END dielectric


~~~~~~~~~~~~~~

BIG END dielectric


« Last Edit: 11/19/2016 06:53 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Results for Brady cone with HDPE-disc at the small end plate. Source power was defined to be 1Watt (30dBm).
εR=2.27
tanδ=0.00031
DIA*=158.75mm
Height=54mm

* to simplify the model I used a diameter equal to the end plate diameter instead of the 156.7mm reported by EW
X_Ray, we have to be careful calling this mode shape "TM010".  That name is correct for a cylinder, which can have a field that is constant in the longitudinal direction.  We know from the exact analytical solution that the fields cannot be constant in the longitudinal direction for a cone, because a constant electromagnetic field in the longitudinal direction cannot satisfy the boundary conditions for a cone (as verified by your FEKO boundary element analysis results). Since the electromagnetic fields are not constant in the longitudinal direction, p is not equal to zero, so this is a degenerate form of TM010; perhaps we should call it TM01?.  This mode shape becomes TM010 as one reduces the cone angle to zero, so that the cone becomes a cylinder.  As the cone becomes a cylinder, the electromagnetic fields become constant in the longitudinal direction.

It is interesting how the electromagnetic fields, and in particular the E field, have a gentle gradient at each end, accommodating the boundary conditions there and the fact that the field cannot be constant in the longitudinal direction.

...snip...
Yes, I have to agree. Especially in the case where the dielectric is at the big end, it makes the frustum effectively even more asymmetric, due to the contracted wavelength inside the dielectric disc. The field at the small end therefore tends to zero, and the standard notation for cylindrical cavities doesn't make sense any more.
If my memory serves, this problem has persisted over several threads now, but so far I have no idea for a notation that makes more sense. Conclusive ideas to solve this issue are very welcome!


EDIT
Maybe a notation based on spherical coordinates fits better?
Rectangular cavity --> TX x,y,z
Cylindrical cavity--> TX φ,R,z

Sections of a sphere, as for conical cavities --> TX φ,θ,r   or semi-spherical TX φ,R,r

While TE/TM depends on the dominant component in the "r" direction?
Does this make sense? ???

Which notation would make sense in a wedge shaped cavity?
Regarding boundary conditions, what if the modal shape near the small end doesn't match the shape near the large end at all?

hence
x, y = quantum numbers m and n; m·λ/2 and n·λ/2 along these axes in Cartesian coordinates
z = quantum number p; p·λ/2 along this axis in cylindrical or Cartesian coordinates
φ = quantum number: the number of full wavelengths per 360° around the central axis of symmetry, in both cylindrical and spherical coordinates
r = quantum number along the radius measured from the apex in spherical coordinates (full quantum number, p·λ/2, similar to "z" in Cartesian or cylindrical coordinates)
R = quantum number along the radius of a cross section in cylindrical coordinates (full quantum number, R·λ/2)
θ = quantum number measured from the "r" axis out to the conducting wall, like "R" in cylindrical coordinates, but along a shell section at constant radius "r" in spherical coordinates
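The contracted wavelength inside the dielectric disc mentioned above is easy to put numbers on. A small sketch, where the 2.45 GHz drive frequency is assumed purely for illustration:

```python
import math

EPS_R = 2.27  # HDPE relative permittivity used in the model

def plane_wave_wavelength(freq_hz: float, eps_r: float = 1.0) -> float:
    """Plane-wave wavelength in vacuum (eps_r = 1) or in a lossless
    dielectric: lambda = c / (f * sqrt(eps_r))."""
    return 299_792_458.0 / (freq_hz * math.sqrt(eps_r))

f = 2.45e9  # assumed drive frequency, for illustration only
lam_vac = plane_wave_wavelength(f)
lam_hdpe = plane_wave_wavelength(f, EPS_R)
print(lam_vac * 1e3, lam_hdpe * 1e3)   # both in mm
print(lam_hdpe / lam_vac)              # contraction factor, 1/sqrt(2.27)
```

The ~0.66 contraction factor is why a disc at one end shifts the effective electrical length so strongly and pushes the field pattern off the cylindrical-cavity picture.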





« Last Edit: 11/22/2016 10:00 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...snip...

Yes, I have to agree. Especially in the case where the dielectric is at the big end, it makes the frustum effectively even more asymmetric, due to the contracted wavelength inside the dielectric disc. The field at the small end therefore tends to zero, and the standard notation for cylindrical cavities doesn't make sense any more.
If my memory serves, this problem has persisted over several threads now, but so far I have no idea for a notation that makes more sense. Conclusive ideas to solve this issue are very welcome!


EDIT
Maybe a notation based on spherical coordinates fits better?
Rectangular cavity --> TX x,y,z
Cylindrical cavity--> TX φ,R,z

Sections of a sphere, as for conical cavities --> TX φ,θ,r   or semi-spherical TX φ,R,r

While TE/TM depends on the dominant component in the "r" direction?
Does this make sense? ???

Which notation would make sense in a wedge shaped cavity?
Regarding boundary conditions, what if the modal shape near the small end doesn't match the shape near the large end at all?

Regarding nomenclature for mode shapes, I just checked and my old article had this note  http://forum.nasaspaceflight.com/index.php?topic=39214.msg1526577#msg1526577  reading:
 
 
(**) Mode shape nomenclature is adopted as per the cylindrical cavity (with constant circular cross section) designation, because there is no standardized way to number truncated cone mode shapes.  I am aware that there is no mode shape for a truncated cone with electromagnetic fields constant in the longitudinal direction, unlike cylindrical cavities which have TM mode shapes with "p=0".  Still, because the truncated cone geometries used up to now have shapes that are not too far from a cylinder with constant cross section (because small cone angles are used and the cones are truncated far from the cone vertex) it is possible to use a cylindrical cavity mode shape designation and select m,n,p accordingly.

The problem with us adopting a more logical nomenclature is that at this point in time probably only you and I would understand it, and it would make it difficult to communicate with others.  It would be like you and I communicating in Esperanto (https://en.wikipedia.org/wiki/Esperanto ), rather than English.  We use English with all its flaws (one of the least structured languages) simply because it is spoken and understood by most people  ;)

This mode shape (TM010?) is particularly problematic regarding nomenclature for a truncated cone, unlike a mode shape like TM212 that would not present as much of a problem with interpretation.  I wonder why JPM asked for this particular mode shape...
« Last Edit: 11/19/2016 08:45 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Jean-Philippe,

Quote from: JPMontillet, PM
If you increase the diameter for the large end to cover it completely...


This is the result for the alternative situation with the increased size of the HDPE dielectric. Source power was defined to be 1 W (30 dBm) as before. Height of the dielectric is still 54 mm; the diameters are modified to fit into the frustum.

The last pic shows that the frequency is even lower due to the increased volume of the dielectric, as expected.
Note: don't use its power numbers, only the resonant frequency, because this sim was performed only to find the frequency and the input power was not defined as 1 W as for the other plots.

Regards
« Last Edit: 11/20/2016 04:01 PM by X_RaY »

Offline WarpTech

  • Full Member
  • ****
  • Posts: 1113
  • Do it!
  • Vista, CA
  • Liked: 1226
  • Likes Given: 1593
Satisfying Gauss's law and conservation of momentum.


Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Monomorphic, regarding your PM

This is just an example of an almost matched loop antenna inside the Brady cone without dielectric for TE012, using a 50 Ω voltage source in FEKO. Of course I have to run it again because it's still not converged, but it shows the way to get an impedance match to the frustum.
Wire radius is 1 mm. The loop dimensions are shown in the pic.
This was just my second try ;) the first version led to the same overcoupling as in your model.
 
Best regards

« Last Edit: 11/27/2016 08:31 PM by X_RaY »

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
Thanks to X_Ray I'm getting closer to an impedance match with a loop antenna. I'm still overcoupled.  This goes much more slowly, as I have to run 30+ minute sweeps with each iteration. I'm also using spherical end-plate geometry. Running another sweep now.
« Last Edit: 11/27/2016 09:54 PM by Monomorphic »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Looks better now :)
Not exactly 50 Ohm, but very close.

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
Looks better now :)
Not exactly 50 Ohm, but very close.

Can you post the antenna location and dims?

Here is my second try. Why is it pointed? How did you get yours to be a nice circle?

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Looks better now :)
Not exactly 50 Ohm, but very close.

Can you post the antenna location and dims?

Here is my second try. Why is it pointed? How did you get yours to be a nice circle?
It's almost the same as shown above. Distance to the base plate is 19 mm. Since you use spherical plates, you may change it to the corresponding coordinates in your model.

Do not use a very coarse self-defined mesh if possible.
My PC runs all night to solve a model ::) need a new one.
« Last Edit: 11/28/2016 04:58 PM by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
...Do not use a very coarse self-defined mesh if possible.
My PC runs all night to solve a model ::) need a new one.
Monomorphic, we had discussed previously my viewpoint that the mesh in your FEKO analyses is too coarse. 
This would be an interesting case study where you could perform a convergence analysis to quantify the effect of mesh size on these parameters.  Then you could post it here for posterity, so that we can all learn about the dependence of the solution on the mesh size.   ;)
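A convergence study like that can be scripted around the solver. The sketch below is solver-agnostic: `solve_cavity` is a hypothetical stand-in for one mesh-and-solve run (it is not a real FEKO API call), and the toy solver only fakes an O(h²) discretization error to exercise the loop:

```python
# `solve_cavity` is a hypothetical placeholder for one mesh-and-solve
# run in whatever tool is used; it is NOT a real FEKO API call.
def mesh_convergence(solve_cavity, sizes_mm, rel_tol=1e-4):
    """Re-solve with successively finer meshes, stopping once the
    resonant frequency changes by less than rel_tol between steps."""
    history = []
    prev = None
    for h in sorted(sizes_mm, reverse=True):   # coarse -> fine
        f_res = solve_cavity(h)                # one solver run at mesh size h
        history.append((h, f_res))
        if prev is not None and abs(f_res - prev) / prev < rel_tol:
            break                              # converged to tolerance
        prev = f_res
    return history

# Toy stand-in faking an O(h^2) discretization error on a 2.4056 GHz mode
fake_solver = lambda h: 2.4056e9 * (1.0 + 1e-4 * h**2)
runs = mesh_convergence(fake_solver, [8, 4, 2, 1, 0.5])
for h, f in runs:
    print(f"h = {h} mm -> f = {f/1e9:.6f} GHz")
```

Plotting the history (frequency, Q, S11 vs. mesh size) is exactly the "for posterity" record proposed above.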
« Last Edit: 11/28/2016 10:07 PM by Rodal »

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
...Do not use a very coarse self-defined mesh if possible.
My PC runs all night to solve a model ::) need a new one.
Monomorphic, we had discussed previously my viewpoint that the mesh in your FEKO analyses is too coarse. 
This would be an interesting case study where you could perform a convergence analysis to quantify the effect of mesh size on these parameters.  Then you could post it here for posterity, so that we can all learn about the dependence of the solution on the mesh size.   ;)

FEKO has the ability to use coarse, standard, and fine meshes. I'm currently using the fine mesh. In addition, FEKO includes the option to solve MoM using higher-order basis functions (HOBF).

"the extension of the method of moments (MoM) and multilevel fast multipole method (MLFMM) in FEKO to include higher-order basis functions (HOBFs) on curvilinear triangular patches. HOBFs reduce the number of unknowns, memory and run-time compared to the traditional planar Rao-Wilton-Glisson (RWG) basis functions."

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
Another run with the antenna more towards the bottom. Finally starting to get some reflection coefficient numbers that are useful. I hit -12 dB on one run but didn't save it. With -5 dB, this resonator/antenna system has a Q of 76,406 using the -3 dB method. E-field strength is also inching up. I'm still significantly overcoupled, so let's see what happens when that improves.
« Last Edit: 11/29/2016 01:05 PM by Monomorphic »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Currently I'm solving Monomorphic's model with spherical end plates and a modified loop antenna, i.e. changed position and diameter. The current result is shown below. It shows that using the related dimensions leads to a coupling factor below 1.

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5688
  • USA
  • Liked: 5559
  • Likes Given: 4990
Currently I'm solving Monomorphic's model with spherical end plates and a modified loop antenna, i.e. changed position and diameter. The current result is shown below. It shows that using the related dimensions leads to a coupling factor below 1.
Are you also using FEKO?

How come you get a perfect circle




instead of the unphysical coarse-sided polygon in Monomorphic's FEKO results?

« Last Edit: 11/30/2016 08:05 PM by Rodal »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Currently I'm solving Monomorphic's model with spherical end plates and a modified loop antenna, i.e. changed position and diameter. The current result is shown below. It shows that using the related dimensions leads to a coupling factor below 1.
Are you also using FEKO?

How come you get a perfect circle




instead of the unphysical coarse-sided polygon in Monomorphic's FEKO results?


Yes. I have a copy of his model and was able to run it with the FEKO version I have available. The mesh is a little coarser while I search for the right antenna coordinates and shape, but the number of sample frequencies is much higher. When the result looks like an impedance match has been reached, I will increase the mesh to the finest level solvable with the student version of the software. I will share the results when it's done. :)

EDIT
While looking at the raw data, I am not sure why the curve seems so smooth ???
Most of the points are not on the circle. It seems the smoothing algorithm works better for some reason, or it's not applied when there are too few samples.
« Last Edit: 11/30/2016 09:14 PM by X_RaY »

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
Currently I'm solving Monomorphic's model with spherical end plates and a modified loop antenna, i.e. changed position and diameter. The current result is shown below. It shows that using the related dimensions leads to a coupling factor below 1.

Can you please double-check the dimensions for the antenna? A radius of 15mm seems large and I'm still showing overcoupled. Did you mean a diameter of 15mm? That seems more in line with EW's 13.5mm diameter loop antenna.
« Last Edit: 12/01/2016 04:22 PM by Monomorphic »

Offline as58

  • Full Member
  • ****
  • Posts: 668
  • Liked: 208
  • Likes Given: 144
Satisfying Gauss's law and conservation of momentum.

I'm sorry, but I don't understand even the first line. Where does the first equality come from? The second equality is Lorenz (not Lorentz) gauge, but what does the third equality mean? What do you mean by magnetic flux? Through what surface?

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
Currently I'm solving Monomorphic's model with spherical end plates and a modified loop antenna, i.e. changed position and diameter. The current result is shown below. It shows that using the related dimensions leads to a coupling factor below 1.

Can you please double-check the dimensions for the antenna? A radius of 15mm seems large and I'm still showing overcoupled. Did you mean a diameter of 15mm? That seems more in line with EW's 13.5mm diameter loop antenna.
I went back to your original model, solver and mesh configuration.
I found the same result for the suggested antenna position as you showed above (upper pic). Till now I'm not sure which of my modifications led to the hugely different results shown. ???
For consistency I will use the settings you sent with the model.
What I found is that the overcoupling problem can be solved as suggested before: move the antenna closer to the big end. Using your config, the z-position should be at ~1 cm (lower pic).

Maybe a finer mesh at/near the antenna is needed so close to the wall?
« Last Edit: 12/04/2016 09:37 AM by X_RaY »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
I didn't change anything else but the antenna position; the configuration is still the one James sent to me.

I think that's good enough to go with. In a real-world construction, due to imperfections, the exact antenna position has to be tuned/verified with a VNA in any case. The basic system impedance will almost never be exactly 50 Ω, and so on...
Nevertheless there will be only a small impedance mismatch left to be tuned out with an external tuner, which minimizes the losses produced within the tuner (compared to the situation where the impedance difference is much greater).

From a construction viewpoint the position is almost ideal, because no long wires are necessary to feed the loop antenna. That configuration minimizes the excitation of other modes inside the frequency range. TE01p is degenerate with TM11p (at least in a cylindrical cavity they sit almost exactly at the same frequency; the conical shape separates the two modes, but not by much in eigenfrequency).
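For reference, the standard relations between the return-loss dip seen on a VNA and the coupling factor β can be sketched in a few lines. Note that |S11| alone cannot distinguish over- from undercoupling; that has to come from the Smith-chart locus, which is why the circles above matter:

```python
def gamma_from_return_loss(s11_db: float) -> float:
    """|Gamma| from a reflection coefficient given in dB (a negative number)."""
    return 10.0 ** (s11_db / 20.0)

def coupling_factor(s11_db: float, overcoupled: bool) -> float:
    """Coupling factor beta at resonance.  The same |S11| corresponds to
    two betas; the Smith chart (locus enclosing the centre = overcoupled)
    tells you which branch applies."""
    g = gamma_from_return_loss(s11_db)
    return (1 + g) / (1 - g) if overcoupled else (1 - g) / (1 + g)

# A -39 dB dip is essentially critical coupling on either branch:
beta_over = coupling_factor(-39.0, overcoupled=True)
beta_under = coupling_factor(-39.0, overcoupled=False)
print(beta_over, beta_under)
```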
« Last Edit: 12/02/2016 08:54 PM by X_RaY »

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
I didn't change anything else but the antenna position; the configuration is still the one James sent to me.

I think that's good enough to go with. In a real-world construction, due to imperfections, the exact antenna position has to be tuned/verified with a VNA in any case. The basic system impedance will almost never be exactly 50 Ω, and so on...
Nevertheless there will be only a small impedance mismatch left to be tuned out with an external tuner, which minimizes the losses produced within the tuner (compared to the situation where the impedance difference is much greater).

From a construction viewpoint the position is almost ideal, because no long wires are necessary to feed the loop antenna. That configuration minimizes the excitation of other modes inside the frequency range. TE01p is degenerate with TM11p (at least in a cylindrical cavity they sit almost exactly at the same frequency; the conical shape separates the two modes, but not by much in eigenfrequency).

With the antenna in that configuration I get a huge -39 dB reflection coefficient with a Q factor of 111,454!  ;D

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
I didn't change anything else but the antenna position; the configuration is still the one James sent to me.

I think that's good enough to go with. In a real-world construction, due to imperfections, the exact antenna position has to be tuned/verified with a VNA in any case. The basic system impedance will almost never be exactly 50 Ω, and so on...
Nevertheless there will be only a small impedance mismatch left to be tuned out with an external tuner, which minimizes the losses produced within the tuner (compared to the situation where the impedance difference is much greater).

From a construction viewpoint the position is almost ideal, because no long wires are necessary to feed the loop antenna. That configuration minimizes the excitation of other modes inside the frequency range. TE01p is degenerate with TM11p (at least in a cylindrical cavity they sit almost exactly at the same frequency; the conical shape separates the two modes, but not by much in eigenfrequency).

With the antenna in that configuration I get a huge -39 dB reflection coefficient with a Q factor of 111,454!  ;D
Q sounds a little bit high, how is it calculated? f/df (-3dB down from the baseline for S11)?

Yes, I used the -3 dB (fc/Δf) method. According to the chart that's 2.40565 GHz / 2.15841e-5 GHz.

There are tools under the Measure tab for setting markers and getting exact frequencies.
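The arithmetic behind that Q figure, for anyone reproducing it, is just the quoted centre frequency divided by the quoted -3 dB bandwidth:

```python
def q_loaded(f_center_hz: float, bw_3db_hz: float) -> float:
    """Loaded Q from the -3 dB bandwidth: Q = f_c / delta_f."""
    return f_center_hz / bw_3db_hz

# Numbers quoted above: 2.40565 GHz centre, 2.15841e-5 GHz (= 21.5841 kHz) bandwidth
q = q_loaded(2.40565e9, 2.15841e4)
print(f"Q = {q:,.0f}")
```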
« Last Edit: 12/03/2016 02:43 PM by Monomorphic »

Offline Peter Lauwer

  • Full Member
  • **
  • Posts: 202
  • Setting up an exp with torsion balance
  • Netherlands
  • Liked: 183
  • Likes Given: 329
Thanks X_RaY and Monomorphic. These sims may prove to be vital to understanding the EMDrive behavior.
I plan to test these loops (along the central axis) in a cylindrical cavity first.
Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.   — Richard Feynman

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
X_Ray, have you had any luck with doing a Time Domain Analysis in the 2.4 GHz range? I need to solve for fmin and fmax. Basically we need to figure out how to make the simulated frequency range equal to the time-signal bandwidth. In my case that is 2.4054 GHz and 2.4059 GHz.  ???
« Last Edit: 12/04/2016 01:22 PM by Monomorphic »

Offline X_RaY

  • Full Member
  • ****
  • Posts: 754
  • Germany
  • Liked: 936
  • Likes Given: 1952
X_Ray, have you had any luck with doing a Time Domain Analysis in the 2.4 GHz range? I need to solve for fmin and fmax. Basically we need to figure out how to make the simulated frequency range equal to the time-signal bandwidth. In my case that is 2.4054 GHz and 2.4059 GHz.  ???

To me it looks like a stand-alone tool to analyze the spectrum of a predefined pulse only.
The resulting graphs (mag,phase,real,imag) can be added to cartesian plots.

Till now I couldn't find a way to apply it to the data.
Still learning  ;)

..second pic is a huge gif-file

http://www.altairuniversity.com/wp-content/uploads/2015/03/UserManual.pdf
section 9.9

« Last Edit: 12/04/2016 06:14 PM by X_RaY »

Offline Monomorphic

  • Full Member
  • ****
  • Posts: 1022
  • United States
    • /r/QThruster
  • Liked: 2370
  • Likes Given: 901
Nice! Similar to some results I had a while back. Notice the counting in ns. This is not TE013, but another mode.

« Last Edit: 12/05/2016 12:40 AM by Monomorphic »

Offline WarpTech

  • Full Member
  • ****
  • Posts: 1113
  • Do it!
  • Vista, CA
  • Liked: 1226
  • Likes Given: 1593
In a PM with @zellerium, we were discussing the software he uses and I started to think about why I wanted it. It just dawned on me that this would be much faster as a team effort. So I would like to make this proposal to those who have modeling capability. We need organization. Dr. Rodal's recent paper shows us how detailed we need to be in every aspect of this analysis, in order to find the right answers. The following list is the "data" that needs to be modeled and recorded. Starting with a frustum. I propose we use TT's design with the spherical end caps, or model Shawyer's latest design, but I'm open to suggestions. The point is, we choose 1 model, and model it to death, to get all the relevant data into a report. I do not mind preparing that report if the modelers can do the real work.

This list for data is based on much of what we've been doing here already, as much as on my own wish list. It is open for discussion and starts like this;

1. Optimal location for the antenna to excite TE013 and have ~50 Ohm input Z.
2. Optimal length, width, shape of the antenna for 50 Ohms.
3. Optimal location for probes to identify frequency and decay time at big end, small end and center wall. i.e., determine Q from the w*t and do VNA at each port.
4. Relative E, H and A, vector field strengths for different materials, PC, Ag, Cu, Al, etc. as a color plot, values & vectors. Everything consistent so they can be compared.
5. Relative Energy Density for different materials, PC, Ag, Cu, Al, etc. as a color plot and values.
6. Relative surface power dissipation for each material, color plot and values.
7. Relative temperature for each material, color plot.

I would put this together into a report or Smartsheet (http://www.smartsheet.com/) for each frustum shape/design/frequency mode. The key to me is to be consistent and thorough with each design, and repeat this for each mode shape and material. It's a lot of work to be thorough!

Dr. Rodal has done real research reports on the truncated cavity last year and now on the Mach effects in the MEGA drive. This type of research takes a lot of time and effort and needs to be coordinated and documented. That's why I want the software so I can do what is needed to advance the cause, but if we work as a team, we can do it together, under budget and ahead of schedule.  ;D

Todd

Offline Peter Lauwer

  • Full Member
  • **
  • Posts: 202
  • Setting up an exp with torsion balance
  • Netherlands
  • Liked: 183
  • Likes Given: 329
In a PM with @zellerium, we were discussing the software he uses and I started to think about why I wanted it. It just dawned on me that this would be much faster as a team effort. So I would like to make this proposal to those who have modeling capability. We need organization. Dr. Rodal's recent paper shows us how detailed we need to be in every aspect of this analysis, in order to find the right answers. The following list is the "data" that needs to be modeled and recorded. Starting with a frustum. I propose we use TT's design with the spherical end caps, or model Shawyer's latest design, but I'm open to suggestions. The point is, we choose 1 model, and model it to death, to get all the relevant data into a report. I do not mind preparing that report if the modelers can do the real work.

This list for data is based on much of what we've been doing here already, as much as on my own wish list. It is open for discussion and starts like this;

1. Optimal location for the antenna to excite TE013 and have ~50 Ohm input Z.
2. Optimal length, width, shape of the antenna for 50 Ohms.
3. Optimal location for probes to identify frequency and decay time at big end, small end and center wall. i.e., determine Q from the w*t and do VNA at each port.
4. Relative E, H and A, vector field strengths for different materials, PC, Ag, Cu, Al, etc. as a color plot, values & vectors. Everything consistent so they can be compared.
5. Relative Energy Density for different materials, PC, Ag, Cu, Al, etc. as a color plot and values.
6. Relative surface power dissipation for each material, color plot and values.
7. Relative temperature for each material, color plot.

I would put this together into a report or Smartsheet (http://www.smartsheet.com/) for each frustum shape/design/frequency mode. The key to me is to be consistent and thorough with each design, and repeat this for each mode shape and material. It's a lot of work to be thorough!

Dr. Rodal has done real research reports on the truncated cavity last year and now on the Mach effects in the MEGA drive. This type of research takes a lot of time and effort and needs to be coordinated and documented. That's why I want the software so I can do what is needed to advance the cause, but if we work as a team, we can do it together, under budget and ahead of schedule.  ;D

Todd

If I can contribute a few experiments, I'd be happy to do so. I plan to measure with a central loop, as you simulated, but first in a cylindrical cavity, and at a few locations. I have to make a new loop for every location, so the number will be limited.
Material: semi-rigid coax (RG402), with a feed-through in the endplate using the SMA bus as in the attached picture. The loop itself can be made from the inner conductor of the semi-rigid or from silvered Cu wire.

Peter
Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.   — Richard Feynman

Offline Testingflight

  • Member
  • Posts: 2
  • Georgia USA
  • Liked: 0
  • Likes Given: 0
Has anyone considered that the thrust from the emdrive found in recent experiments might result from the Coriolis effect? I have not researched this type of drive and only came across articles about this topic by accident, so forgive me if this has been suggested already. In this type of drive the microwaves would probably set up vibrations in the drive casing. This vibration might result in an effective force on the support structure as the Earth rotates, which might be interpreted as thrust. The Coriolis effect is used in vibrating structure gyroscopes and mass flow meters. I haven't thought this out in detail, but it seems a simple explanation given the small thrust. It could be tested by changing the drive orientation relative to the Earth's rotation and by measuring and varying the casing vibration. I really hope this type of drive is real and physics is found to explain its operation, but it seems a reach.

Offline WarpTech

  • Full Member
  • ****
  • Posts: 1113
  • Do it!
  • Vista, CA
  • Liked: 1226
  • Likes Given: 1593
In a PM with @zellerium, we were discussing the software he uses and I started to think about why I wanted it. It just dawned on me that this would be much faster as a team effort. So I would like to make this proposal to those who have modeling capability. We need organization. Dr. Rodal's recent paper shows us how detailed we need to be in every aspect of this analysis, in order to find the right answers. The following list is the "data" that needs to be modeled and recorded. Starting with a frustum. I propose we use TT's design with the spherical end caps, or model Shawyer's latest design, but I'm open to suggestions. The point is, we choose 1 model, and model it to death, to get all the relevant data into a report. I do not mind preparing that report if the modelers can do the real work.

This list for data is based on much of what we've been doing here already, as much as on my own wish list. It is open for discussion and starts like this;

1. Optimal location for the antenna to excite TE013 and have ~50 Ohm input Z.
2. Optimal length, width, shape of the antenna for 50 Ohms.
3. Optimal location for probes to identify frequency and decay time at big end, small end and center wall. i.e., determine Q from the w*t and do VNA at each port.
4. Relative E, H and A, vector field strengths for different materials, PC, Ag, Cu, Al, etc. as a color plot, values & vectors. Everything consistent so they can be compared.
5. Relative Energy Density for different materials, PC, Ag, Cu, Al, etc. as a color plot and values.
6. Relative surface power dissipation for each material, color plot and values.
7. Relative temperature for each material, color plot.

I would put this together into a report or Smartsheet (http://www.smartsheet.com/) for each frustum shape/design/frequency mode. The key to me is to be consistent and thorough with each design, and repeat this for each mode shape and material. It's a lot of work to be thorough!

Dr. Rodal has done real research reports on the truncated cavity last year and now on the Mach effects in the MEGA drive. This type of research takes a lot of time and effort and needs to be coordinated and documented. That's why I want the software so I can do what is needed to advance the cause, but if