Dr. Rodal:
I'm pretty sure SeeShell's large end plate is also copper backed ceramic (see update #4 images on her gofundme page).
...If both the small end plate and the large end plate of Shell's frustum of a cone are Alumina ceramic on the outside, this may (**), unfortunately, make it more difficult to immediately interpret the thermal camera data concerning what mode was really excited in Shell's truncated cone, due to the lower thermal diffusivity of Alumina (as warned by user OnlyMe). A transient thermal analysis may be necessary to interpret the thermal camera data (**).
The thermal camera's infrared sensor will not be measuring the copper temperature but the lower, thermally diffused temperature of the ceramic (*). If the Alumina is thick enough, two-dimensional diffusion across the surface of the Alumina may blur the temperature boundaries necessary to properly identify the mode shape.
What is the thickness of the Alumina on the outside of the large end plate in Shell's test?
PS: I understood from the quote "Bottom 3.17mm plate is 2.5mm thick of Alumina Ceramic, the top plate is 10mm. Bonded .032" .80mm O2 Free Copper." that the small plate Alumina was 2.5 mm thick.
Was the top plate Alumina 10 mm thick? And was the top plate copper 0.8 mm thick?
If the Alumina on the top end plate was 10 mm thick (or, similarly, 10 mm - 0.8 mm = 9.2 mm thick), that is 4 times (10 mm / 2.5 mm = 4, or 9.2 / 2.5 = 3.7) thicker than the Alumina on the small plate.
Recall that the Fourier time (https://en.wikipedia.org/wiki/Fourier_number ) goes like the square of the thickness,

t = Fo * L^2 / alpha

(where Fo is the Fourier number, L is the thickness, and alpha is the thermal diffusivity)
so this means that for diffusion time purposes, the Alumina on the top plate effectively diffuses 4^2 = 16 times (or (9.2/2.5)^2 = 14 times) more slowly than the Alumina on the small end plate (for the same Fourier number).
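A quick back-of-envelope check of that scaling, using the plate thicknesses quoted above:

```python
# Back-of-envelope check: for a fixed Fourier number, the Fourier time
# t = Fo * L^2 / alpha scales as the square of the thickness L.
small = 2.5e-3   # small end plate Alumina thickness, m (2.5 mm)
top_a = 10.0e-3  # top plate, if the full 10 mm is Alumina
top_b = 9.2e-3   # top plate, if 0.8 mm of the 10 mm is copper

ratio_a = (top_a / small) ** 2   # 16x slower diffusion
ratio_b = (top_b / small) ** 2   # ~13.5x (the "14 times" above)
print(round(ratio_a, 1), round(ratio_b, 1))  # 16.0 13.5
```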
_________
(*)
The thermal conductivity of Alumina (Aluminum Oxide) ceramic ranges from 12 to 39 W/(m K) (SI units), compared to copper's 385.0 W/(m K), so it is about 1/10 to 1/30 that of copper.
Similarly, the thermal diffusivity of sintered Alumina is about 1/10 that of copper:
thermal diffusivity of sintered Alumina 0.111× 10^(−4) m^2/s
thermal diffusivity of copper 1.11 × 10^(−4) m^2/s
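For reference, the diffusivity follows from conductivity, density, and specific heat as alpha = k / (rho * cp). A minimal sketch, using typical handbook property values (assumptions for illustration, not measurements of these particular plates):

```python
# Thermal diffusivity alpha = k / (rho * cp).
# Property values below are typical handbook figures (assumptions for
# illustration, not measurements of the actual plates in this build).
def diffusivity(k, rho, cp):
    """k in W/(m K), rho in kg/m^3, cp in J/(kg K); returns alpha in m^2/s."""
    return k / (rho * cp)

alpha_alumina = diffusivity(k=30.0, rho=3900.0, cp=880.0)   # ~0.9e-5 m^2/s
alpha_copper = diffusivity(k=385.0, rho=8960.0, cp=385.0)   # ~1.1e-4 m^2/s
print(round(alpha_alumina / alpha_copper, 3))  # ~0.08, i.e. roughly 1/10 of copper
```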
(**) I haven't run any numbers (except for these dimensional analysis calculations), so can't tell at the moment. Have to wait for more data to be confirmed...
I can't get video from this camera (Seek), but the display shows real-time data, and I plan on videoing it during a run, where we can see the thermal increases in the frustum and the growth of the modes as they form.
Everyone recalls from their microwave oven Owner's Manual (you read that, right?) that you should not operate the oven while it is empty. If the RF energy is not absorbed in heating the food, it reflects back to the magnetron, which can then overheat.
An EmDrive looks to me a lot like an empty microwave oven. Where is all the energy going? Clearly some goes into heating the walls of the frustum, and we are going to measure that. If there is a force generated, some energy has to go into that, and exactly how much will be interesting.
But some is being reflected back. A measurement of VSWR could let us calculate that, so we can figure how much net energy goes into the frustum to do something there.
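The VSWR arithmetic can be sketched as follows; the 100 W forward power and VSWR of 1.5 are illustrative numbers, not measurements from this build:

```python
# Net power into the cavity from forward power and measured VSWR:
# |Gamma| = (VSWR - 1) / (VSWR + 1); reflected fraction = |Gamma|^2.
def net_power(p_forward_w, vswr):
    gamma = (vswr - 1.0) / (vswr + 1.0)   # reflection coefficient magnitude
    return p_forward_w * (1.0 - gamma ** 2)

# Illustrative numbers only: 100 W forward at a VSWR of 1.5
# gives |Gamma| = 0.2, so 4% reflected and 96 W delivered.
print(round(net_power(100.0, 1.5), 1))  # 96.0
```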
Regarding the Seek camera video: maybe try using a screen recorder instead. It would make for a more accurate recording.
I know it's only 2D, but it can give you a feel for microwaves...
http://www.met.reading.ac.uk/clouds/maxwell/
I love the evanescent waves (no energy ha) heating the potato.
http://www.met.reading.ac.uk/clouds/maxwell/microwave_oven.html
Shell
How useful would massive amounts of processing power be to this project? I believe I could build (if time is found) without too much effort, a distributed version of MEEP that would allow some of the project leads to assign different computers in our cluster a specific job to do and then report back the results. We could then distribute the binary to anyone who wants to lend their spare CPU cycles to the project which could become rather substantial as people hear about it.
I guess this comes down to me not understanding exactly what we are looking for from the RF simulation besides resonance. Can the ideal EMDrive be found by iteratively changing parameters and perhaps using some non-linear equation optimization to tweak different parameters of the cavity?
Also, not to beat a dead horse here, but I still think a Slack group would greatly accelerate communication for core members. I love this forum, and I think status and results should still be posted here, but it's a terrible way to have a multi-threaded conversation on different related EMDrive topics. I have limited spare time, and scrolling through everyone's quotes is very tedious.
Slack lets you
- Have different channels in the same group
- Tag people so they know you have mentioned them
- Direct chat
- Drag and drop files in the channels
- Amazing integrations with outside services
If you're interested in joining (just 3 of us at the moment), private message me and I'll send an invite:
emdrive.slack.com
Just as a bit of background: I run a software company which has a Slack group, I help direct an outside organization as an adviser in another group, and I was also exploring starting an aviation company with a third group. I can do all of this simultaneously and be very responsive because it is all in Slack. It would have been horrendous over email or in a format like this.
Merry Christmas!
David
Btw, both this request and some other feedback (by tellmeagain and others) made me take another look at these null test runs and, specifically, at the RF-on period. A very unscientific and absolutely statistically insignificant examination of individual mid-point values both inside and outside the range suggests there is indeed a null force present, likely on the order of 20 to 40 uN. Collecting more runs, allowing for a longer RF-on period, and then applying some formal statistics will very likely show this force. Those 10 A currents do indeed come with a cost. Still, I hope that if any thrust is ever observed on this build, it will be on the order of at least 100+ uN, so it would be visible without resorting to statistical methods, just like the force from the electrostatic plates is.
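One possible shape for that "formal statistics" step is a Welch t-test comparing the RF-on and RF-off midpoint populations; this is just a sketch, and the readings below are invented placeholders, not data from these runs:

```python
# Sketch of a Welch t-test comparing displacement midpoints from
# RF-on vs RF-off periods. The readings below are invented
# placeholders, NOT data from these runs.
import math

def welch_t(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

rf_on = [31.0, 28.5, 35.2, 25.9, 30.4]   # placeholder midpoints, uN
rf_off = [2.1, -1.4, 0.8, -0.5, 1.0]     # placeholder midpoints, uN
t = welch_t(rf_on, rf_off)   # |t| well above ~2 suggests a real offset
```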
Meep runs up to now have been extremely unrepresentative of actual running times for the EM Drive experiments, so the main goal of massive amounts of processing power would be to run the finite difference Meep solution to times approaching real EM Drive runs (at least on the order of seconds).
This would enable one to understand:
1) Is a steady state of resonance approached in the EM Drive experiments, or are the EM Drive experiments solely a matter of transient response?
2) In either case, what do the electric fields, Poynting vector and stress fields look like for sufficiently long time (seconds) of response?
3) During the extremely short runs of Meep up to now, the Poynting vector and stresses were increasing at an exponential rate. What does the equilibrium balance between Poynting vector field rate and stress gradient look like vs time?
In addition to the above:
4) Due to insufficient processing power, the stress tensor was only output and processed for the end plates. It would be interesting to output and process the stress tensor throughout the whole cavity, including the side conical walls where the stress tensor components are supposed to balance the net stress calculated at the end plates of the frustum of a cone.
5) Meep plots up to now have only shown the electromagnetic field components on a single plane. What is needed is a full 3-D solid plotting capability to show the electromagnetic field distribution throughout the 3D volume instead of just an arbitrary plane.
In addition to the above:
6) The true beauty of Meep is that it is an open code. As you very well ask, using Meep is questionable because it is based on Maxwell's equations. Given enough interest and given enough processing power one could write subroutines to explore other models incorporating General Relativity formulations (Sachs, etc.) , but I think that would be way into the future, as before approaching that one should complete the above points.
It is fascinating how the Poynting vector (the energy flux: the rate of energy transfer per unit area) forms two vortices on the sides of the potato. Electromagnetic vortices!

The type of "distributed" computing I am talking about would be single machines all running different simulations, not the same one. Would this be helpful? What timeframe are we looking at with the needed resolution and step count? Weeks? Months?
What you really need is a GPU-based solver. SpaceX has some amazing GPU-based CFD code (not open source, unfortunately) which is truly awe-inspiring to me. This is the way to get these sorts of answers on a single machine, in my opinion, but obviously we don't currently have free access to that sort of code.
- David
Dr. Rodal
A follow-up on my last response: there does seem to be an open source project dedicated to GPU-based FDTD for electromagnetic simulation. It's called B-CALM. I've found a few papers on it, but I am more familiar with CFD, so I don't think I could judge whether this would work:
http://sourceforge.net/projects/b-calm/?source=typ_redirect
There is a PDF manual in the archive.
We made a 2048-cycle run of SeeShells' Yang-Shell 6-degree model at resolution = 500 in meep, or rather Quixote did, and it took about two weeks on a single machine. At a frequency of 2.45 GHz, 2048 cycles is about 1 microsecond of simulated real time. So reaching full seconds of simulated real time would take millennia.
At lower resolution, full seconds could be reached sooner.
ParaView is a nice 3-D viewer for the raw meep output .h5 files. There are a couple of problems there, too: ParaView is also a computer resource hog, and the .h5 files are too large to easily move from one machine to a remote machine.
To my knowledge, several people have looked at implementing the meep code to run on GPUs. No success that I know of, but I do know that a commercial FDTD code is available which runs on GPUs. That does not mean meep can be made to do so, however, and as far as I know, the people who looked at the issues did not or could not complete the task.
No one can wait thousands of years for a meep run to complete; speed is needed. Aside from running on GPUs, another option would be to re-code the fixed lattice in meep so that only the actual structures use high resolution, while the relatively vast areas of empty space within the computational lattice are propagated at much lower resolution. Imbfan has looked into this. The conclusion was that re-coding this core aspect of meep was beyond her skill level, and most likely beyond all but professional C++ programmers with an E&M background.
It seems to me that, unfortunately, we need another 30 years of computer speed advancing like it did during the last 30 years.
If you watch that SpaceX video, they actually implement dynamic use of space within the GPU based on the rate of change in each area. The end result is an incredibly dense grid of voxels only where needed, and an incredibly complex simulation running in a day on a single computer. Now all we need to do is convince Elon to open source it.
I'm fairly sure there is zero chance they will release the source code for software that's explicitly engineered to model the flow of hot, supersonic gases. Rocket science is the very definition of weapons-related and export-controlled technology.
A 30-times increase in speed would prove very useful to propagate meep to RF steady state, though not to thermal steady state. Perhaps a combined meep with variable resolution (step size) running on GPUs would begin to address the problem, because 1,000 years divided by 30 is still over 30 years per second of simulated real time.
I do know that commercial FDTD codes with variable resolution are also available, so that also can be done.
Interestingly, related to the speed test results from Stanford: my computer runs on an AMD Phenom(TM) II 840T quad-core engine. One thing about this engine is that it incorporates AMD's built-in vector math. That feature is not common and, as I understand it, is no longer available from AMD, but I think it helps speed up meep calculations.
And doing the calculation at 14 days per microsecond actually equals 38,356 years per second of simulated time. That is the problem.
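The arithmetic behind that estimate, as a quick sketch using the run numbers quoted above (two weeks for 2048 cycles at 2.45 GHz):

```python
# The arithmetic behind the millennia estimate: ~14 wall-clock days
# for 2048 cycles at 2.45 GHz, i.e. about 0.84 us of simulated time.
freq_hz = 2.45e9
cycles = 2048
sim_time_us = cycles / freq_hz * 1e6    # simulated time per two-week run
print(round(sim_time_us, 2))            # 0.84

wall_days_per_us = 14.0                 # observed wall-clock cost
years_per_sim_second = wall_days_per_us * 1e6 / 365
print(round(years_per_sim_second))        # 38356
print(round(years_per_sim_second / 30))   # ~1279: a 30x speedup still leaves millennia
```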