Some time back I sent EW a paper on maintaining active tune resonance using a microwave PLL. I don't imagine anyone here has the resources for that, though, which is why using very high Q is going to be problematic.
Charging again without substance? Annus Veritas means the year in which we will find out the truth:

1) You are the one that claimed "all doubts will be removed" this year (referring to the re-publication of Shawyer's 2014 conference paper).
2) With the great testing set-ups of rfmwguy, SeeShells and others, I have great expectations that we will learn a lot.
3) You have claimed that you will be reporting on tests too.
4) With Prof. Tajmar reporting tests in a vacuum, we will now have two prestigious institutions (NASA and TU Dresden, Germany) reporting on EM Drive tests in vacuum (something that neither Shawyer nor Yang ever reported: not a single test in vacuum).
5) Hopefully NASA will report later in the year on the progress of their tests?

So I expect 2015 to be an important year to learn about the EM Drive, yes, "Annus Veritas".
Assuming that the control algorithm can adequately predict the future behavior of the actual process based on historical feedback. Control operates on feedback, which implies a time lag between feedback input and actuator output. Whether the control will be successful depends on how well the control algorithm can model and actively adapt to the process without becoming unstable or lagging.
Are photons degraded after they: 1) bounce off a non-moving end plate (EM Drive in idle mode)? ...
Quote from: Eer on 07/20/2015 01:34 pm
...To be clear:

2) Agreed. New results need to be able to be matched to previous runs. The issue with different numbers of rows and columns was to document a disconnect in specifications of which version of file output I was attempting to compare my runs against; it seems the file I thought I was supposed to use was not, in fact, a comparable run, and as a result I spent three days trying to make my output match the previous one. Thus my suggestion that the control files used to create output files always be provided with those output files in the future. That, at least, should allow follow-on efforts to re-run the control file and verify the output files associated with it.

3) Agreed. However, the use of files other than csv format may be necessary when collecting multiple output files into a single package file to support direct comparisons of cell values across runs. It's clumsy, but better than using file references for links between multiple csv files. So consider it an artifact of the post-processing analysis, like any other tool you might use. CSV is a simple, easy-to-review, widely supported standard data interchange format that we should use for sharing data among researchers. The alternative is to use versions of MEEP/HDF5 which store binary data in a canonical format that is not machine independent, and I think that's not worthwhile.

Ed

I agree that it would be helpful to have the unadulterated MEEP INPUT control files used to create OUTPUT files provided with those output files. The Meep input control file controls the Meep analysis, thus it is all one needs to run the analysis: "it is all you need". More or different is not better, because it would be subject to interpretation. It is the same as with numerical data in engineering drawings: redundant information is not better. Extra information should be provided as comments.
The Meep input control files can be explained, with comments, as much as necessary, but they should never be substituted by any other type of input description that may be subject to interpretation or translation issues.

Ditto for the MEEP OUTPUT information. To post-process the data, the actual output information from Meep is needed:

* the total Meep time (the computer run time is completely irrelevant to post-processing),
* the total number of Meep time slices,
* the total number of Meep time steps,
etc.

It would be helpful to have both the MEEP INPUT control file and the MEEP OUTPUT information referred to above provided as .txt files in the same Google Drive folder as the csv files, for easy reference to understand the input and output associated with the csv files.

This is all part of formalizing a collaboration between multiple parties.
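To make the packaging suggestion concrete, here is a minimal sketch of bundling a run's .ctl input, captured console output, and csv output into a single per-run archive. All file names and contents below are hypothetical stand-ins, not actual project files:

```shell
#!/bin/sh
# Sketch only: bundle a Meep run's input, console log, and csv output
# together so anyone can re-run the .ctl and verify the outputs.
# All file names and contents are hypothetical stand-ins.
set -e
run=demo-run
mkdir -p "$run"
printf '(set! resolution 16)\n' > "$run/cavity.ctl"        # the input control file
printf 'Meep time steps: 1000\n' > "$run/cavity-run.txt"   # captured console output
printf 't,Ez\n0,0.0\n'          > "$run/ez-slice.csv"      # post-processed csv
tar czf "$run.tar.gz" "$run"   # one archive per run: input and output travel together
```

In a real run, the console output carrying the total Meep time and time-step counts would be captured with something like `meep cavity.ctl | tee cavity-run.txt` rather than written by hand.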
...To be clear:

2) Agreed. New results need to be able to be matched to previous runs. The issue with different numbers of rows and columns was to document a disconnect in specifications of which version of file output I was attempting to compare my runs against; it seems the file I thought I was supposed to use was not, in fact, a comparable run, and as a result I spent three days trying to make my output match the previous one. Thus my suggestion that the control files used to create output files always be provided with those output files in the future. That, at least, should allow follow-on efforts to re-run the control file and verify the output files associated with it.

3) Agreed. However, the use of files other than csv format may be necessary when collecting multiple output files into a single package file to support direct comparisons of cell values across runs. It's clumsy, but better than using file references for links between multiple csv files. So consider it an artifact of the post-processing analysis, like any other tool you might use. CSV is a simple, easy-to-review, widely supported standard data interchange format that we should use for sharing data among researchers. The alternative is to use versions of MEEP/HDF5 which store binary data in a canonical format that is not machine independent, and I think that's not worthwhile.

Ed
...I do not recommend using Google Drive for this. We'll be dealing with small text files (typical csv output size is 1.4MB, control files are only a few K), possibly hundreds of them. A source control repository is a much better solution for our use case. I can set up a git repository on my server if everyone is comfortable using git and SSH with public key logins, or we can use github. If I do host it on my server, I have registered the domain name emdrive.science for our purposes.
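For what it's worth, the bare-repo-plus-clone arrangement being proposed can be sketched as below. All paths and names are hypothetical stand-ins, and only local demo commands actually run; a real deployment would clone over SSH from the server:

```shell
#!/bin/sh
# Sketch of the proposed setup: a bare repository on a server, cloned by
# collaborators. All paths and names are hypothetical stand-ins; a real
# deployment would clone over SSH, e.g. git clone git@emdrive.science:emdrive.git
set -e
rm -rf /tmp/emdrive-demo
mkdir -p /tmp/emdrive-demo && cd /tmp/emdrive-demo
git init --bare server.git          # stands in for the repo hosted on the server
git clone server.git worktree       # a collaborator's local copy
cd worktree
printf '(set! resolution 16)\n' > baseline.ctl   # small text files version well
git add baseline.ctl
git -c user.name=Demo -c user.email=demo@example.com commit -m "add baseline control file"
git push origin HEAD                # publish back to the shared repository
```

Since the files in question are small text (csv around 1.4MB, control files a few K), full history in git stays cheap and every run remains diffable against previous ones.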
I suspect everyone has seen some version of Escape Dynamics' microwave-powered shuttle at this point, but just in case, here's a short (and badly written) article that includes a nice bit of embedded animation. It's only vaguely related, but hey, it is a spacecraft proposal, and it does use microwaves, so here you go:

http://www.engadget.com/2015/07/20/escape-dynamics-microwave-spacecraft/

As several people have pointed out, the energy losses in the kind of microwave sources that ED is proposing would seem to be pretty daunting. Of course, much could be solved if they could instead hit their shuttle with masers. Side benefit: developing the requisite high-Q microwave cavities necessary for building all those big masers might give an opportunity to test... some other theories.
http://escapedynamics.com/technology/hpm-2/

500 kW CW @ 92 GHz good enough?
I have a further suggestion: make all files public domain, especially all .ctl files. That way anyone can benefit from the work without any kind of bureaucracy. Public domain helps the advancement of science.
...I do not recommend using Google Drive for this. We'll be dealing with small text files (typical csv output size is 1.4MB, control files are only a few K), possibly hundreds of them. A source control repository is a much better solution for our use case...
Quote from: leomillert on 07/20/2015 07:31 pm
I have a further suggestion: make all files public domain, especially all .ctl files. That way anyone can benefit from the work without any kind of bureaucracy. Public domain helps the advancement of science.

Completely agree: the repo should be publicly browseable, with accounts only needed for people committing. And ignore my "Men In Black" bad joke... If you want me to set this up on the wiki server, let me know; it would just take a few minutes.
(*) Or that the files are on a server, accessible by http (no passwords!!) in a structured way that I can program into Mathematica. That would also work, as I can have Mathematica automatically download them over http (no passwords!!).
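As a sketch of what such a password-free, structured layout enables (the base URL and run names below are made up for illustration), the address of every file can be constructed mechanically and handed to any downloader, Mathematica included:

```shell
#!/bin/sh
# Sketch: with a predictable http layout (no passwords), download URLs can
# be generated mechanically. The base URL and run names are hypothetical.
base=http://emdrive.science/runs
for run in run001 run002; do
  echo "$base/$run/ez-slice.csv"
done > url-list.txt
# wget -i url-list.txt   # commented out: the URLs above are placeholders
cat url-list.txt
```

The point is the predictable structure: once `provider/run/file` paths are agreed, no human needs to hand out links or passwords for each new run.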
Quote from: SeeShells on 07/19/2015 01:13 am
Could you clarify why the Wiki page shows NWPU Prof. Juan Yang's test in TE012 mode while here you state mode TM11? Nice piece of work, Aero and Dr. Rodal!!! Was the antenna placed to excite a TM (transverse magnetic) mode instead of trying to excite a TE (transverse electric) mode?

I'm not sure about the M here; I'm pretty sure about the 11. My understanding is that the antenna is identical to rfmwguy's except for the placement.

Quote from: aero
Same antenna, 58 mm in the y direction, Ez excitation.
(set! antlongx 0) ; direction vector of dipole antenna, SI units
(set! antlongy 0.058) ; = 58 mm
(set! antlongz 0)

Many modes are nearby; which mode you excite has a lot to do with the antenna placement.
Could you clarify why the Wiki page shows NWPU Prof. Juan Yang's test in TE012 mode while here you state mode TM11? Nice piece of work, Aero and Dr. Rodal!!!
Same antenna, 58 mm in the y direction, Ez excitation.
(set! antlongx 0) ; direction vector of dipole antenna, SI units
(set! antlongy 0.058) ; = 58 mm
(set! antlongz 0)
Quote from: saucyjack on 07/20/2015 07:39 pm
Completely agree: the repo should be publicly browseable, with accounts only needed for people committing. And ignore my "Men In Black" bad joke... If you want me to set this up on the wiki server, let me know; it would just take a few minutes.

tidux will do most of the heavy computing, so I think it would be better to have it set up on his emdrive.science (but let's wait to hear his opinion). Maybe we could use the wiki to document the structure of the repository (its folders and files), how someone can commit, etc. What do you think?
tidux will do most of the heavy computing, so I think it would be better to have it set up on his emdrive.science (but let's wait to hear his opinion).

Maybe we could use the wiki to document the structure of the repository (its folders and files), how someone can commit, etc. What do you think?
Quote from: leomillert on 07/20/2015 07:43 pm
tidux will do most of the heavy computing, so I think it would be better to have it set up on his emdrive.science (but let's wait to hear his opinion). Maybe we could use the wiki to document the structure of the repository (its folders and files), how someone can commit, etc. What do you think?

I concur on git, and suggest a clone on two sites is a good idea. Time I learned git.

I'd still like to see a file/directory/test-run naming convention. The proposed hierarchy is a good start in that direction, but a uniform naming convention makes sense when there are multiple providers as well as multiple consumers.

One question I have: who is expecting whom to hack on control files? How will the modifications be validated and verified against test objectives? I don't feel qualified, in Lisp, Scheme, or Meep scripting, nor on a scientific or engineering basis, to make ANY sort of valid judgment as to how things should be coded.
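For illustration only, one possible uniform convention (provider / geometry / mode / run-id, with each run's files kept together) might look like the sketch below. Every name here is a hypothetical example, not an agreed project standard:

```shell
#!/bin/sh
# Illustrative sketch of a uniform naming convention: provider / geometry /
# mode / run-id, with each run's .ctl, console log, and csv kept together.
# All names are hypothetical examples, not an agreed project standard.
set -e
for run in aero/truncated-cone/TM11/run001 rfmwguy/truncated-cone/TE012/run001; do
  mkdir -p "demo-repo/$run"
  touch "demo-repo/$run/input.ctl" "demo-repo/$run/console.txt" "demo-repo/$run/fields.csv"
done
find demo-repo -type f | sort    # list the resulting layout
```

A layout like this lets a consumer locate any provider's run for a given mode without asking, and it maps directly onto a git repository's directory tree.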
Annus Veritas for the EM Drive