Author Topic: EM Drive Developments - related to space flight applications - Thread 3  (Read 3130613 times)

Offline SeeShells

  • Senior Member
  • *****
  • Posts: 2442
  • Every action there's a reaction we try to grasp.
  • United States
  • Liked: 3186
  • Likes Given: 2708
I have big girl toys to build and a shop to get ready ... as one PM email said "Get er Done"!

Thanks I enjoy our chats... sometimes. ;)

Shell

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
All this talk of Q is interesting, and it does concern me in the sense that we need to see long runs from meep but the question to me becomes, "How long should the run be?" And there, the definition of Q becomes a problem. I have attached an image of what Wikipedia says about it. Which one are we using?

The problem, as it looks to me is that it will take several thousand cycles for the energy stored to reach steady state, everything before that is transient. So again, "How long should the run be?"

The generic calculation formulas you posted from Wikipedia, are not an issue, as they give similar results. The issue, as excellently discussed by zen-in, rfcavity, rfmwguy and others has to do with how to experimentally measure power and calculate loaded Q based on S11 (and/or S21 -there are papers on figuring out loaded Q using both S11 and S21).

Dr. Notsosureofit, who has very long-term experience in this field, has posted that he has always used phase rather than return loss for calculating Q.

Yang, instead of using the S11 zero dB reference plane to measure their -3 dB down bandwidths from, as is done by Paul at NASA, uses the most negative dB S11 value located at the resonance frequency and measures up 3 dB toward the S11 zero dB plane.  Therefore, of course, the bandwidth figures used by the Chinese in this unorthodox calculation are going to be ridiculously small, which yields correspondingly artificially large values of the calculated Q-factor.

The issue of how long you would have to march the Finite Difference solution to reach steady-state (if steady-state is achievable with the RF feed on ) can only be addressed by solving the transient solution for a truncated cone cavity, and that cannot be done exactly because there is no such exact solution.  Others have posted that you would have to get close to a microsecond.  This means that you would have to have from 1 to 2 orders of magnitude greater number of time steps, as your present run represents only 0.013063 microseconds of transient response.

That means that instead of the 6,527 time steps now,  you would need from 65,000 to 650,000 FD time steps.

By the way, it is known in the numerical solution literature that experimental Q's are usually much lower than calculated Q's, as losses are usually underestimated.
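The step-count scaling in the paragraphs above can be sanity-checked in a few lines (a sketch; only the 6,527-step / 0.013063 µs figures come from the run under discussion, and the 1 µs target is the figure others have suggested):

```python
# Scaling estimate: with a fixed FD time step, the number of steps needed
# to reach a target transient time scales linearly with that time.
def steps_needed(current_steps, current_time_us, target_time_us):
    return int(round(current_steps * target_time_us / current_time_us))

# ~1 microsecond of transient response from the 6,527-step baseline:
n = steps_needed(6527, 0.013063, 1.0)
print(n)  # on the order of 5e5, inside the 65,000-650,000 range above
```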
« Last Edit: 07/12/2015 03:16 am by Rodal »

Offline deltaMass

  • Full Member
  • ****
  • Posts: 955
  • A Brit in California
  • Liked: 671
  • Likes Given: 275
Excuse my ignorance of the basic algorithm used in this Finite Difference technique, but is it the case that, as the name implies, total simulation error accumulates with the number of cycles simulated?  That being so, is there a way of knowing how many cycles it takes before the error causes significant non-physical divergence of the simulation results? I ask because a few hundred thousand cycles might be out of reach for that reason.

Offline Flyby

  • Full Member
  • ***
  • Posts: 388
  • Belgium
  • Liked: 451
  • Likes Given: 48
This is what I'm using...
0.0625 inch holes with 3/32 inch stagger spacing, copper perforated sheet ~0.020 inch thick. Have thicker on order but it needs to be made.

A 0.0625 inch hole works out to a 188.9 GHz wavelength.
I understand your reason for choosing the mesh, but..

I'm just worried about how the mesh will react to the accumulated heat inside.
IIRC, Shawyer did have several burn-throughs with a full-metal frustum..
The narrow material pathways between the holes might burn through a lot quicker...

So... keep your distances and keep those fingers on the "off" switch...
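For reference, the 188.9 GHz figure quoted above is just the frequency whose free-space wavelength equals the hole diameter; a quick check (the rounding of c accounts for the last digit), noting the 2.45 GHz drive is far below it:

```python
# Frequency whose free-space wavelength equals the 0.0625 in hole size.
c = 299792458.0               # speed of light, m/s
hole_m = 0.0625 * 0.0254      # hole diameter in metres
f_ghz = c / hole_m / 1e9
print(f"{f_ghz:.1f} GHz")
```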

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
Excuse my ignorance of the basic algorithm used in this Finite Difference technique, but is it the case that, as the name implies, total simulation error accumulates with the number of cycles simulated?  That being so, is there a way of knowing how many cycles it takes before the error causes significant non-physical divergence of the simulation results? I ask because a few hundred thousand cycles might be out of reach for that reason.

Stability errors were addressed early by Courant, and finally in the magnum opus of Lax and Richtmyer in the middle '50s (attached below).  For numerical accumulation errors, see: http://link.springer.com/article/10.1007%2FBF01398460

Problems requiring hundreds of thousands of finite-difference time steps have been tackled successfully since WWII for DoD purposes.  Once stability of the FD scheme has been addressed, and one is working in double precision, the main remaining concern is avoiding differences of small magnitude.  This is an important reason that Meep uses a non-dimensional scheme: to avoid such errors.  If errors occur they will be apparent because the solution will look fractal, which is an immediate giveaway of a numerical issue.  I think the main issues with FD schemes are FD stability and FD mesh convergence.  There is an intimate connection between stability and the growth of round-off errors, which was also discussed in other publications by Lax.
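The Courant stability condition referenced above can be written down directly; for a uniform 3-D Yee grid the bound is dt ≤ 1/(c·sqrt(1/dx² + 1/dy² + 1/dz²)). The cell sizes below are illustrative, not from any particular meep run:

```python
# Courant (CFL) stability bound for a 3-D FDTD grid: exceed this time
# step and round-off errors grow without bound (the scheme is unstable).
import math

def max_stable_dt(dx, dy, dz, c=299792458.0):
    return 1.0 / (c * math.sqrt(1.0/dx**2 + 1.0/dy**2 + 1.0/dz**2))

dt = max_stable_dt(1e-3, 1e-3, 1e-3)   # 1 mm cubic cells
print(f"{dt:.3e} s")
```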
« Last Edit: 07/11/2015 07:08 pm by Rodal »

Offline TheTraveller

All this talk of Q is interesting, and it does concern me in the sense that we need to see long runs from meep but the question to me becomes, "How long should the run be?" And there, the definition of Q becomes a problem. I have attached an image of what Wikipedia says about it. Which one are we using?

The problem, as it looks to me is that it will take several thousand cycles for the energy stored to reach steady state, everything before that is transient. So again, "How long should the run be?"

The generic calculation formulas you posted from Wikipedia, are not an issue, as they give similar results. The issue, as excellently discussed by zen-in, rfcavity, rfmwguy and others has to do with how to experimentally measure power and calculate loaded Q based on S11 (and/or S21 -there are papers on figuring out loaded Q using both S11 and S21).

Dr.  Notsosureofit who has very long-term experience in this field, has posted that he has always used phase rather than return loss for calculating Q.

Yang, instead of using the S11 zero dB reference plane to measure their -3 dB down bandwidths from, as is done elsewhere, uses the most negative dB S11 value located at the resonance frequency and measures up 3 dB toward the S11 zero dB plane.  Therefore, of course, the bandwidth figures used by the Chinese in this unorthodox calculation are going to be ridiculously small, which yields correspondingly artificially large values of the calculated Q-factor.

The issue of how long you would have to march the Finite Difference solution to reach steady-state (if steady-state is achievable with the RF feed on ) can only be addressed by solving the transient solution for a truncated cone cavity, and that cannot be done exactly because there is no such exact solution.  Others have posted that you would have to get close to a microsecond.  This means that you would have to have from 1 to 2 orders of magnitude greater number of time steps, as your present run represents only 0.013063 microseconds of transient response.

That means that instead of the 6,527 time steps now,  you would need from 65,000 to 650,000 FD time steps.

By the way, it is known in the numerical solution literature that experimental Q's are usually much lower than calculated Q's, as losses are usually underestimated.

Ever since I was 10, tearing apart and rebuilding scrap Korean War transceivers (I taught myself electronics) to convert them to the ham bands and making more money than my dad, I have known to calculate an LC circuit's Q as -3 dB down from the peak, as attached.

Never have I heard that you calculate Q 3 dB up from the zero reference level. That is just nuts. You want to know the bandwidth from the peak, which, as Wikipedia says:

Quote
In physics and engineering the quality factor or Q factor is a dimensionless parameter that describes how under-damped an oscillator or resonator is, as well as characterizes a resonator's bandwidth relative to its center frequency.

https://en.wikipedia.org/wiki/Q_factor



« Last Edit: 07/11/2015 07:09 pm by TheTraveller »
It Is Time For The EmDrive To Come Out Of The Shadows

Offline rfmwguy

  • EmDrive Builder (retired)
  • Senior Member
  • *****
  • Posts: 2205
  • Liked: 2713
  • Likes Given: 1134
This was discussed back in May:


http://forum.nasaspaceflight.com/index.php?topic=36313.msg1369553#msg1369553

Quote from: Rodal
Paul March has addressed and explained this as follows: Chinese (Prof. Yang) calculated loaded Q factors are much higher than the Q's reported by Shawyer and by NASA's Eagleworks because of the unorthodox way that the Chinese calculate their loaded Q factors.  Instead of using the S11 zero dB reference plane to measure their -3 dB down bandwidths from, as is done elsewhere, the Chinese use the most negative dB S11 value located at the resonance frequency and measure up 3 dB toward the S11 zero dB plane.  Therefore, of course, the bandwidth figures used by the Chinese in this unorthodox calculation are going to be ridiculously small, which yields correspondingly artificially large values of the calculated Q-factor.
Here is where they went wrong...under no industrial RF standard does anyone measure Q on return loss, S11. It is done on S21, forward power in the frequency domain for cavities. I stand by my claim that "Specsmanship" was used to create an unnaturally large Q, either by unfamiliarity or intent.

Note that S21 requires a 2 port measurement, input and output (note the sampling port on the frustums will provide the output). I'd bet a six-pack of craft beer that realistic Qs are in the 4 digit range for both shawyer and yang. And yes Doc, Yang should have used the -3dB points below 0 insertion, not -3dB above best return loss...not RF types IMHO.


Offline TheTraveller

This was discussed back in May:


http://forum.nasaspaceflight.com/index.php?topic=36313.msg1369553#msg1369553

Quote from: Rodal
Paul March has addressed and explained this as follows: Chinese (Prof. Yang) calculated loaded Q factors are much higher than the Q's reported by Shawyer and by NASA's Eagleworks because of the unorthodox way that the Chinese calculate their loaded Q factors.  Instead of using the S11 zero dB reference plane to measure their -3 dB down bandwidths from, as is done elsewhere, the Chinese use the most negative dB S11 value located at the resonance frequency and measure up 3 dB toward the S11 zero dB plane.  Therefore, of course, the bandwidth figures used by the Chinese in this unorthodox calculation are going to be ridiculously small, which yields correspondingly artificially large values of the calculated Q-factor.
Here is where they went wrong...under no industrial RF standard does anyone measure Q on return loss, S11. It is done on S21, forward power in the frequency domain for cavities. I stand by my claim that "Specsmanship" was used to create an unnaturally large Q, either by unfamiliarity or intent.

Note that S21 requires a 2 port measurement, input and output (note the sampling port on the frustums will provide the output). I'd bet a six-pack of craft beer that realistic Qs are in the 4 digit range for both shawyer and yang. And yes Doc, Yang should have used the -3dB points below 0 insertion, not -3dB above best return loss...not RF types IMHO.

Simple to understand paper on S21 Q measurements (which is -3db down from the peak):
http://www.beehive-electronics.com/articles/Resonator%20measurements%20with%20non-contact%20probes%201.0.pdf
« Last Edit: 07/11/2015 07:20 pm by TheTraveller »
It Is Time For The EmDrive To Come Out Of The Shadows

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
This was discussed back in May:


http://forum.nasaspaceflight.com/index.php?topic=36313.msg1369553#msg1369553

Quote from: Rodal
Paul March has addressed and explained this as follows: Chinese (Prof. Yang) calculated loaded Q factors are much higher than the Q's reported by Shawyer and by NASA's Eagleworks because of the unorthodox way that the Chinese calculate their loaded Q factors.  Instead of using the S11 zero dB reference plane to measure their -3 dB down bandwidths from, as is done elsewhere, the Chinese use the most negative dB S11 value located at the resonance frequency and measure up 3 dB toward the S11 zero dB plane.  Therefore, of course, the bandwidth figures used by the Chinese in this unorthodox calculation are going to be ridiculously small, which yields correspondingly artificially large values of the calculated Q-factor.
Here is where they went wrong...under no industrial RF standard does anyone measure Q on return loss, S11. It is done on S21, forward power in the frequency domain for cavities. I stand by my claim that "Specsmanship" was used to create an unnaturally large Q, either by unfamiliarity or intent.

Note that S21 requires a 2 port measurement, input and output (note the sampling port on the frustums will provide the output). I'd bet a six-pack of craft beer that realistic Qs are in the 4 digit range for both shawyer and yang. And yes Doc, Yang should have used the -3dB points below 0 insertion, not -3dB above best return loss...not RF types IMHO.
Wish we had Paul in the thread to discuss the NASA-measured Q's.  Meanwhile, here is a reference using one-port S11 (unloaded Q):

http://www.engineering.olemiss.edu/~eedarko/experience/rfqmeas2b.pdf

http://www.jpier.org/PIER/pier127/29.12032613.pdf

S21 for loaded Q by Petersan:

http://arxiv.org/pdf/cond-mat/9805365

MY CONCLUSION SO FAR:


1) No IEEE or other organization standards on how to measure Q's

2) EM Drive authors report Q's using different methods, often not detailing what method is used (or even whether the loaded or unloaded Q is reported)

3) EM Drive researchers would be well-advised to report in detail how their Q is calculated, when reporting quality factors
« Last Edit: 07/12/2015 03:13 am by Rodal »

Offline apoc2021

  • Member
  • Posts: 18
  • Liked: 37
  • Likes Given: 27
@aero et al - regarding longer MEEP runs

If we can figure out how to offload the MEEP computation to the cloud, I would be willing to contribute server time (within reason, of course). I suspect others may be willing to donate server time, too.

Have you heard of the FEFF project? Link here: http://www.feffproject.org/feffproject-scc-caseexamples-meep.html. Seems they've done some of the heavy lifting to link MEEP and Amazon AWS. Might be helpful..

Offline rfmwguy

  • EmDrive Builder (retired)
  • Senior Member
  • *****
  • Posts: 2205
  • Liked: 2713
  • Likes Given: 1134
For those who want to learn Filter terminology from a leader in this biz, go here:

http://www.klmicrowave.com/catalog/KLCat02.pdf

I consulted for this company and competed with them on many occasions... they are the de facto filter experts up to 18 GHz in mil/aero. Notice they use Q in their Terms and Definitions for bandpass and notch filters; it is useless for low- and high-pass filters. This is the industry standard I was speaking of: f0 / 3 dB BW...

A frustum cavity of any size and shape is bandpass... asymmetrical, but it is bandpass, and its Q should be measured as its center frequency divided by its total 3 dB bandwidth. And the Qs will remain below 5 digits in real life.
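That f0-over-3-dB-bandwidth measurement can be sketched on a synthetic S21 trace (an illustrative Lorentzian resonance, not a measured frustum; the small overshoot versus the true Q comes from using -3.0 dB rather than the exact half-power -3.01 dB, plus the frequency grid):

```python
# Loaded Q from an S21 sweep: locate the peak, walk out to the -3 dB
# points on either side, and take Q = f0 / BW.
import math

def loaded_q(freqs, s21_db):
    i0 = max(range(len(s21_db)), key=lambda i: s21_db[i])
    half = s21_db[i0] - 3.0
    lo = next(i for i in range(i0, -1, -1) if s21_db[i] <= half)
    hi = next(i for i in range(i0, len(s21_db)) if s21_db[i] <= half)
    return freqs[i0] / (freqs[hi] - freqs[lo])

# Synthetic Lorentzian resonance at 2.45 GHz with a true Q of 5000:
f0, Q = 2.45e9, 5000.0
freqs = [2.449e9 + 100.0 * k for k in range(20001)]   # 100 Hz grid
s21 = [10 * math.log10(1 / (1 + (2*Q*(f - f0)/f0)**2)) for f in freqs]
print(round(loaded_q(freqs, s21)))   # close to the true Q of 5000
```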

Offline ElizabethGreene

  • Member
  • Posts: 69
  • Nashville, Tennessee
  • Liked: 138
  • Likes Given: 3
I will state my case again:

Why does the waveguide industry NOT make cavities of copper mesh?

The EMDrive is a tapered waveguide.

They are, in situations where extreme weight restrictions are in place.  The difficulty with using a mesh is that there is some leakage of energy into space through the mesh.

Theoretically it would be fine to use a mesh waveguide for a bunch of different things, but inevitably you'll have some high-quality "engineer" who zip-ties a bunch of signal cables to the holy waveguide to make it tidy.  Then you'll spend 3 months trying to figure out why you've got a bunch of spurious noise.

You can measure this effect on your home microwave.  Compare the rf level at 1 cm from the mesh part of the door to 1cm from a solid wall.

Offline zen-in

  • Full Member
  • ****
  • Posts: 541
  • California
  • Liked: 483
  • Likes Given: 371

...

I will state my case again:

Why does the waveguide industry NOT make cavities of copper mesh?

The EMDrive is a tapered waveguide.

The problem with copper mesh, if it is actually woven copper screen, is the poor conductivity between wires, so it is not isotropic.  I haven't seen copper screen that has been spot welded, and I suspect it is not made because it is too difficult to get a reliable spot weld, while welding with a eutectic alloy would lead to dissimilar-metal problems.  In a woven screen the wires just contact each other with little surface area, so the impedance to RF is high.  An example that was discussed earlier described concentric rings at the end of a circular waveguide.  This would have much lower impedance.  Copper sheet with holes punched in it is isotropic and would also have a low impedance.  If the holes are small enough that the operating frequency is below their cutoff frequency, only evanescent (near-field) radiation would be detected on the other side.
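The below-cutoff point can be made concrete by treating each hole as a short circular waveguide, whose dominant TE11 mode cuts off at f_c = 1.8412·c/(π·d). The 0.0625 inch diameter is just the example hole size from earlier in the thread:

```python
# Dominant-mode (TE11) cutoff of a circular hole treated as a short
# circular waveguide. Below f_c only evanescent fields leak through.
import math

c = 299792458.0              # speed of light, m/s
d = 0.0625 * 0.0254          # hole diameter in metres
f_c = 1.8412 * c / (math.pi * d)
print(f"{f_c/1e9:.0f} GHz")   # ~111 GHz, far above the 2.45 GHz drive
```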
« Last Edit: 07/11/2015 08:32 pm by zen-in »

Offline deltaMass

  • Full Member
  • ****
  • Posts: 955
  • A Brit in California
  • Liked: 671
  • Likes Given: 275
On sealed and closed cavities vs. open cavities, both in air and containing air.

The sealed cavity will exhibit a buoyancy effect because increased temperature will cause the enclosed air to exert wall pressure and cause slight ballooning of the cavity walls. For example, a sealed thin aluminium soft-drink can, when heated a few tens of degrees, will exhibit on the order of 50 µg-weight of buoyancy.

The open cavity will lose heated air in order to maintain constant pressure with the outside. Thus its weight will decrease because a volume of heated air weighs less than that same volume of colder air.

Both effects cause an apparent loss in weight. It's to be expected that an open cavity will produce a bigger weight loss than a sealed cavity, because of the high stiffness of the sealed cavity.

This weight loss can readily be factored out by either
a) measuring thrust in the horizontal direction, or
b) differencing the measured weights with thrust-downward resp. thrust-upward
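The open-cavity part of the argument is straightforward ideal-gas arithmetic; a sketch with an assumed 10-litre volume and a 30-degree temperature rise (both figures illustrative, not from any specific frustum):

```python
# Open-cavity air-mass change at constant pressure (ideal gas): heated
# air escapes, so the cavity contains less air mass and weighs less.
def air_density(T_kelvin, p_pa=101325.0, M=0.028964, R=8.314462):
    """Dry-air density from the ideal gas law, kg/m^3."""
    return p_pa * M / (R * T_kelvin)

V = 0.010                      # 10 litres, a rough cavity-sized volume
dm = V * (air_density(293.15) - air_density(323.15))   # 20 C -> 50 C
print(f"apparent mass loss: {dm*1e3:.2f} g")
```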
« Last Edit: 07/11/2015 08:58 pm by deltaMass »

Offline aero

  • Senior Member
  • *****
  • Posts: 3628
  • 92129
  • Liked: 1145
  • Likes Given: 360
@aero et al - regarding longer MEEP runs

If we can figure out how to offload the MEEP computation to the cloud, I would be willing to contribute server time (within reason, of course). I suspect others may be willing to donate server time, too.

Have you heard of the FEFF project? Link here: http://www.feffproject.org/feffproject-scc-caseexamples-meep.html. Seems they've done some of the heavy lifting to link MEEP and Amazon AWS. Might be helpful..

Thanks. If someone has done it, that means running meep on the cloud is possible. Good to know.

Add: There are two issues with meep and computers. One is the CPU time required for long runs at modest resolution and/or small lattices. The other is the high memory requirement for any run with high resolution or larger lattices. The minimum lattice size is set by the model geometry, but when investigating field energies outside of the model, the lattice must include the area outside the cavity model plus a boundary-layer area around it. And there is again the factor-of-8 memory increase for each doubling of the lattice dimensions.

It is easy to imagine that the cloud could provide sufficient memory, but it does come down to "How much memory is addressable by each processor?"

For the current case of long runs with a small lattice and modest resolution, memory should not be an issue. And run time might not be an issue for some. It is for me because I am required (by the wife) to turn my computer off at night. With 32 complete cycles requiring just under 1 hour of run time, 100 cycles should complete in less than 3 hours, but 1000 cycles would exceed my allotted run window of about 14 hours maximum. A 30-hour computer run is not that bad in a laboratory environment; it's just too much for me to do at home. If I can learn how to restart meep from saved data, that may be an easy solution. But if it is decided that 10,000 cycles are needed, well, that's not so easy.
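The scaling mentioned above can be tabulated: a 3-D lattice's cell count (and so memory) grows 8x per doubling of resolution, and runtime grows roughly 16x once the halved time step is included (a rough model that ignores constant factors and cache effects):

```python
# Back-of-envelope 3-D FDTD cost scaling per doubling of lattice
# resolution: memory ~ cells ~ 2^3 = 8x; runtime ~ cells x steps ~ 16x
# because the stable time step also halves.
def scale_factors(doublings):
    memory = 8 ** doublings
    runtime = 16 ** doublings
    return memory, runtime

mem, run = scale_factors(1)
print(mem, run)  # 8 16
```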
« Last Edit: 07/11/2015 09:27 pm by aero »
Retired, working interesting problems

Offline X_RaY

  • Full Member
  • ****
  • Posts: 852
  • Germany
  • Liked: 1146
  • Likes Given: 2479
Somebody posted this a couple of hours ago in another EM Drive forum:

<<You need to ask yourself why copper mesh is not used in the microwave industry to build waveguides? Would be heaps lighter, lower weight and cost.
Might be because mesh is good at absorbing microwave energy that strikes it but bad at reflecting / propagating microwave energy that strikes it.
The inside of your cavity needs to be very highly polished, ding & scratch free rigid copper that reflects and propagates microwave energy with VERY little energy loss, instead of absorbing the energy and turning it into heat.>>

That's not correct; the use of a mesh is actually quite common in the aerospace industry:

" Why is my Satellite Dish full of holes?" http://www.thenakedscientists.com/forum/index.php?topic=16208.0

"A study of microwave transmission perforated flat plates" http://ipnpr.jpl.nasa.gov/progress_report2/II/IIO.PDF

Wikipedia <<With lower frequencies, C-band for example  (IEEE C 4 – 8 GHz), dish designers have a wider choice of materials. The large size of dish required for lower frequencies led to the dishes being constructed from metal mesh on a metal framework. At higher frequencies, mesh type designs are rarer though some designs have used a solid dish with perforations>>

http://www.yldperforatedmetal.com/Perforated-Metal-Screen.htm

Even if the mesh width is small for a given wavelength, the Q will be a little bit lower than in the full-metal case.
The shielding with a mesh isn't perfect, and maybe there are currents on the outside as well. That could result in radiation from the whole cavity out into free space. ???
I have experimental data for the use of a Cu mesh for only one endplate. Qualitatively, with only 1 layer of mesh the Q factor of a given (H) resonance is smaller than with 2 layers (both with a mesh width of ~1/12 lambda).

and:

Hans A. Wolfsperger, Elektromagnetische Schirmung: Theorie und Praxisbeispiele, e-ISBN 978-3-540-76913-2.

Translation, page 269:

"The attenuation at hole coupling is proportional to 10 log(1/r^6) = 20 log(1/r^3) = 60 log(1/r). That is, a doubling of the hole diameter reduces the shielding effectiveness by 18.1 dB."
  :)
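The quoted 18.1 dB figure checks out numerically, since the 60·log(1/r) form means a doubling of the diameter costs 60·log10(2) dB:

```python
# Shielding-effectiveness change per doubling of hole diameter, from the
# 60*log10(1/r) attenuation law quoted above.
import math

delta_db = 60 * math.log10(2)
print(round(delta_db, 1))  # 18.1
```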
« Last Edit: 07/11/2015 09:12 pm by X_RaY »

Offline Rodal

  • Senior Member
  • *****
  • Posts: 5911
  • USA
  • Liked: 6124
  • Likes Given: 5564
The pros and cons of a perforated mesh:

PROS:

* reduces weight (very important for aerospace applications and for EM Drive testing)
* reduces wind resistance effects (very important for large satellite dishes, and for EM Drive testing to prevent the gas effect that has plagued microwave-pressure experiments since Maxwell's time, as demonstrated in the first successful experiment to accurately measure microwave pressure, by Dr. Cullen in his Ph.D. thesis)
* visibility of what is happening inside the microwave cavity
* it prevents a sealed microwave cavity from becoming a pressure vessel as the moist air inside heats up and the pressure therefore increases per PV = nRT (important for EM Drive experiments where an exhaust jet may be produced)
* it diminishes buoyancy effects (important for EM Drive experiments). See deltaMass's post for a more comprehensive discussion: http://forum.nasaspaceflight.com/index.php?topic=37642.msg1403300#msg1403300
* it prevents liquids like water (rain, snow, etc.) from collecting inside (hat tip Shell)
* reduced eddy-current losses due to the reduced surface area exposed to the magnetic field. See rfmwguy: http://forum.nasaspaceflight.com/index.php?topic=37642.msg1403367#msg1403367

CONS:

* perforation has to be significantly smaller than the microwave wavelength.  See X-Ray's post for more discussion: http://forum.nasaspaceflight.com/index.php?topic=37642.msg1403303#msg1403303
* perforation reduces stiffness, and therefore perforated mesh is more subject to distortion
* durability (the reduced stiffness of perforated plates means that eventually they will get distorted by handling stresses, this is the main reason why waveguides are not made of perforated meshes, as waveguides usually weigh little and durability concerns vastly exceed the benefits of weight saving for a waveguide)
* conductivity between wires and possibly anisotropy of a wire mesh, impedance in perforated plates.  See zen-in's post:  http://forum.nasaspaceflight.com/index.php?topic=37642.msg1403296#msg1403296
* spurious noise and other issues from some energy leakage.  See ElizabethGreene's post http://forum.nasaspaceflight.com/index.php?topic=37642.msg1403288#msg1403288

the following COULD BE A CON OR A PRO depending on input power going into heat and time length of operation:

* CON: perforation means less thermally conductive metal to act as a heat sink. On the other hand, PRO: open perforations provide convective heat transfer through the holes, so the reduced heat sink has to be weighed against the benefits of convection. It basically depends on the thickness (thermal diffusivity is more effective through the thickness than in the lateral directions). A thick non-perforated plate should be better than a thin perforated plate, since thermal diffusion through a metal is much, much faster than thermal convection; the benefits of the thick plate will outweigh those of the perforated thin sheet until enough heat has been absorbed by the thick plate, at which point the convective benefits of the perforated plate may win out (depending on the speed of convection).

In outer space (vacuum) there is no thermal convection whatsoever (hat tip aero for reminding us of that), therefore the heat-sink advantage of a non-perforated plate is even more significant and has to be balanced against the payload weight savings from a perforated plate.
« Last Edit: 07/12/2015 12:32 am by Rodal »

Offline mwvp

  • Full Member
  • **
  • Posts: 267
  • Coincidence? I think Not!
  • Liked: 175
  • Likes Given: 31
Microwave magnetrons can have a +-30MHz bandwidth. If you designed a cavity bandwidth wide enough to suck all that up, the Q would be around 41.

Perhaps you missed my earlier post:

http://forum.nasaspaceflight.com/index.php?PHPSESSID=43unoij870akptr8943h1chf64&topic=37642.msg1403044#msg1403044

Mr. Coulter is getting 50 - 100KHz BW. Done right, the maggie may obligingly mode-lock for you.
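The "Q would be around 41" figure upthread is just the center frequency over the bandwidth, assuming a 2.45 GHz magnetron with the quoted ±30 MHz spread:

```python
# Q needed for a cavity to pass a magnetron's full +/-30 MHz spread:
# Q = f0 / BW with a 60 MHz total bandwidth at 2.45 GHz.
f0_hz = 2.45e9
bw_hz = 60e6
q = f0_hz / bw_hz
print(round(q, 1))  # 40.8
```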

Offline mwvp

  • Full Member
  • **
  • Posts: 267
  • Coincidence? I think Not!
  • Liked: 175
  • Likes Given: 31
I'm just looking for a possible hybrid mode in the cavity, but it is not so easy.
There are some possible candidates in NASA's paper.
Some sensitivity formulas can be used to adjust the cavity dimensions and to control the frequencies.
Very cool.
By the way, in corrugated waveguides hybrid modes have very low losses.  In a cavity, perhaps they produce a higher Q.

Interesting. I'd like to see a simulation of what a Doppler shift does to the RF in the cavity, forward/reverse paths mode-split or something. If it takes an hour to FDTD-simulate a few dozen nanoseconds, is it 100 years to simulate a dozen milliseconds?

Oh well.

There's another graph that was posted here, showing frequency and modes "Shawyer Conical Resonant Cavity Modes-2 (4).jpg". Couldn't find it using search, here it is again:


« Last Edit: 07/11/2015 10:16 pm by mwvp »

Offline aero

  • Senior Member
  • *****
  • Posts: 3628
  • 92129
  • Liked: 1145
  • Likes Given: 360
I have a question for the theorists.

Isn't it our objective to pack as much energy into the cavity as we can just to see the real world physical effect?

So let's imagine a cavity made of "Unobtainium" that will not melt or deform under any circumstances. And let it have an infinite Q for good measure. At what energy level do "known" things start to happen within that cavity? Doesn't it start to create electrons and perhaps other particles? At what energy does it start to create gravitons, or will the electron creation drain energy to the point that the graviton-creation energy levels can't be reached? Higgs particles, if you prefer.
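One known benchmark relevant to the electron-creation question is the Schwinger critical field, above which vacuum pair production becomes non-negligible; it sits enormously far beyond any achievable cavity field (a sketch using CODATA constants):

```python
# Schwinger critical field for vacuum pair production:
# E_s = m_e^2 * c^3 / (e * hbar), roughly where an electric field can do
# an electron rest energy of work over a Compton wavelength.
m_e  = 9.1093837015e-31   # electron mass, kg
c    = 299792458.0        # speed of light, m/s
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s

E_s = m_e**2 * c**3 / (e * hbar)
print(f"{E_s:.2e} V/m")   # ~1.32e18 V/m
```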
Retired, working interesting problems
