a^-2 b^-2 L^2 Q^1 P^1 F^-1 c^-1 (a^-2 + L^-2)^-1   0.15  0.61  *
a^-1 b^-2 L^1 Q^1 P^1 F^-1 c^-1 (a^-1 + L^-1)^-2   0.13  0.58  *
compared to McCulloch's
a^0  b^0  L^1 Q^1 P^1 F^-1 c^-1 |a^-1 - b^-1|^1    0.46  0.61  <-

Great frobnicating

Yes, applause for your great effort. --- now, if it could only talk ... I'll try to convert these to conventional nomenclature.

Quote
a^-2 b^-2 L^2 Q^1 P^1 F^-1 c^-1 (a^-2 + L^-2)^-1   0.15  0.61  *
a^-1 b^-2 L^1 Q^1 P^1 F^-1 c^-1 (a^-1 + L^-1)^-2   0.13  0.58  *
compared to McCulloch's
a^0  b^0  L^1 Q^1 P^1 F^-1 c^-1 |a^-1 - b^-1|^1    0.46  0.61  <-

AIUI - a = w_big, b = w_small and the others are as always, so:

a^-2 b^-2 L^2 Q^1 P^1 F^-1 c^-1 (a^-2 + L^-2)^-1   0.15  0.61  *
force = QP/fc * (L^2/ab^2)/(a^2 + L^2)

and

a^-1 b^-2 L^1 Q^1 P^1 F^-1 c^-1 (a^-1 + L^-1)^-2   0.13  0.58  *
force = QP/fc * (L/a*b^2)/(1/a + 1/L)^2

and McCulloch's formula,

a^0 b^0 L^1 Q^1 P^1 F^-1 c^-1 |a^-1 - b^-1|^1      0.46  0.61  <-
force = QP/fc * L * (1/a - 1/b)

which is not what you wrote.

And this is interesting, but I'm still wondering about the effect of L = cavity height, which this doesn't address.

It is even a mystery why NASA Eagleworks chose to test at those particular frequencies, as remarked by Ludwick.

MeasuredFrequency = c/L; therefore L = c / MeasuredFrequency;

Quote
MeasuredFrequency = c/L; therefore L = c / MeasuredFrequency;

But I don't buy that. L need only have units of length. That's why I asked you to test L = cavity height.

And by the way, I am not kernosabe. Or kemosabe either.

Try again.

AIUI - a = w_big, b = w_small and the others are as always, so:

a^-2 b^-2 L^2 Q^1 P^1 F^-1 c^-1 (a^-2 + L^-2)^-1   0.15  0.61  *
force = QP/c * (L^2/(a^2 b^2)) / (1/a^2 + 1/L^2) = QP/c * L^4/(b^2 (a^2 + L^2))

a^-1 b^-2 L^1 Q^1 P^1 F^-1 c^-1 (a^-1 + L^-1)^-2   0.13  0.58  *
force = QP/c * (L/(a b^2)) / (1/a + 1/L)^2 = QP/c * a L^3/(b^2 (a + L)^2)

and McCulloch's formula,

a^0 b^0 L^1 Q^1 P^1 F^-1 c^-1 |a^-1 - b^-1|^1      0.46  0.61  <-
force = QP/c * L * (1/a - 1/b)

which is not what you wrote.

And this is interesting, but I'm still wondering about the effect of L = cavity height, which this doesn't address.

Quote
MeasuredFrequency = c/L; therefore L = c / MeasuredFrequency;

But I don't buy that. L need only have units of length. That's why I asked you to test L = cavity height.

I'm surprised nobody bites on this nice looking one

F = 13100 P/c ab(1/b - 1/a)^2
or
F = 13100 P/c (a-b)^2/(ab)

( a^1 b^1 L^0 Q^0 P^1 F^-1 c^-1 |a^-1 - b^-1|^2 ) / mean value (without log) -> 1.16 1.05 1.05 0.71 1.35 2.42 0.34

For comparison, McCulloch's (which is great, I don't contest):

( a^0 b^0 L^1 Q^1 P^1 F^-1 c^-1 |a^-1 - b^-1|^1 ) / mean value (without log) -> 0.50 1.16 0.81 0.87 0.89 3.95 0.70

(as per the order of the seven rows of tabulated data)

Alright, there is a big fudge factor of 13100. That looks like the ballpark of Q values, but note that it doesn't move: it is still 13100 even with Q values going from 5900 to 50000. First and fourth values 1.16 and 0.71, ratio 1.63 - no more relative deviation than MiHsC's 0.50 and 0.87, ratio 1.74.

To me this indicates that the former formula is as good at predicting an effect independent of Q as the latter is at indicating a linear dependency on Q. Introducing a constant adds a lot of information to fit the data (considering the sparsity of data, the risk of overfitting is great), but it also discards two parameters, Q and Lambda (or frequency), so it is simpler in this respect.

What would 13100 stand for? Let me see... something vaguely around the squared inverse of the fine structure constant, for instance? Do I have an agenda? Of course I have an agenda. But this isn't numerology.

And this can wait until tomorrow.

Quote from: frobnicat on 10/16/2014 09:42 PM
I'm surprised nobody bites on this nice looking one

F = 13100 P/c ab(1/b - 1/a)^2
or
F = 13100 P/c (a-b)^2/(ab)

... snip ...

Well, obviously

(a-b)^2/(ab) = (a/b - 1) + (b/a - 1)
or
(a-b)^2/(ab) = (AR - 1) + (1/AR - 1)

which is a symmetrized measure of the distance from unity of the aspect ratio (AR = a/b) between the two diameters of the bases of the truncated cone. (This measure is zero for AR = 1 and it goes to infinity either as AR --> Infinity or as AR --> 0.)

But how can the photons produce a net thrust force? And a force exceeding the one of a photon rocket?

The cat already prescribed: MeasuredFrequency = c/L; therefore L = c / MeasuredFrequency;

In MiHsC the inertial mass (mi) is modified as mi = m(1 - L/4T), where m is the unmodified mass, L is the Unruh wavelength determined by the acceleration, and T is the Hubble distance ... snip ...

What if the resonant cavity walls acted like a Hubble horizon, especially for Unruh waves of a similar length (as they are in this case)? Then the inertial mass of the photons would increase towards the cavity's wide end, since more Unruh waves would fit there, since mi = m(1 - L/2w), where w is the cavity width. The force carried by the photons then increases by this factor as they go from the narrow end (width w_small) towards the wide end (width w_big). The force difference between ends is

dF = (PQ/c)*((L/w_big) - (L/w_small)) = (PQ/f)*((1/w_big) - (1/w_small)).

If we consider L to be the RF drive wavelength, then how do we justify talking about the Unruh effect and how it may interact with the photons to cause the excessive thrust? If L is not the RF wavelength then it could be just about anything although it seems that it could be related to cavity dimensions. Ask Prof. M, but I think it doesn't have to be.


Quote from: Rodal on 10/16/2014 10:51 PM
If we consider L to be the RF drive wavelength, then how do we justify talking about the Unruh effect and how it may interact with the photons to cause the excessive thrust? If L is not the RF wavelength then it could be just about anything although it seems that it could be related to cavity dimensions. Ask Prof. M, but I think it doesn't have to be.

OK point well taken. You are correct, that is the Unruh wavelength explanation.

Quote from: aero on 10/16/2014 08:03 PM
And by the way, I am not kernosabe. Or kemosabe either.

He knoweth not for whom the bell tolleth.

Quote from: Rodal on 10/16/2014 10:51 PM
If we consider L to be the RF drive wavelength, then how do we justify talking about the Unruh effect and how it may interact with the photons to cause the excessive thrust? If L is not the RF wavelength then it could be just about anything although it seems that it could be related to cavity dimensions. Ask Prof. M, but I think it doesn't have to be.

OK point well taken. You are correct, that is the Unruh wavelength explanation.

In my equationless style, I'm still not on board with the HYPOTHETICAL Unruh wave explanation. Further, since there must be an integral number of these faith-based waves in the cavity, and since resonance is THE operative factor, there should have been, from the summa cavea arachis at any rate, much tighter control over the bandwidth of the wavelength sent to the device.

What you guys are talking about is not making sense. Not sayin' you're talking nonsense. You're still talking about the copper geometry as having some special refractive index which works at 1.9xxx GHz, using waves which have not been seen.

It's rough being me. But somebody has to do it.

Quote
OK point well taken. You are correct, that is the Unruh wavelength explanation.

Here is the practical problem. There are an infinite number of Unruh wavelengths, and they have an energy spectrum.

.... I wrote a small program to generate an exhaustive search on formulas over the relevant factors, then sieve out those formulas that fit the available data. This is completely theoretically agnostic, but it does check for dimensional consistency (as far as kg m s units are concerned).

The search goes over any product of the terms a b L Q P F c (respectively w_big, w_small, wavelength = c/Freq, Q factor, Power, Thrust, Speed_of_light) with all possible whole exponents from -2 to +2 (going through 0), and tries to equal 1 (with the experimental data). It also tries an "extended" term (exterm) that is a combination of 2 homogeneous terms (that is, a, b or L), each at any power -2 to +2, combined through any of the operators sum, difference, geometrical_average, and then raised to any power -2 to +2. This does cover the formula by McCulloch but not Shawyer's.

Example of understanding the following dumps: McCulloch's formula reads

a^0 b^0 L^1 Q^1 P^1 F^-1 c^-1 |a^-1 - b^-1|^1 = 1

or said otherwise

F = P Q L/c (1/b - 1/a)

Note that the difference operator for the extended term is enclosed in absolute value (manual permutation needed to remove it).

The sieve goes like this: use the formula on each of the seven data points to generate a value, hopefully close to 1. If it is not close to 1 but close to a given value (say 2) for all the data points, then we have a constant fudge factor; but if the standard deviation around it is small, this is still interesting: a strong relation still holds between the terms in such a formula.
The mean and deviation are calculated in log space, that is, a mean of 0 is a best result (formula gives values around 1), while a mean of -1 or +1 says the formula gives values e (=2.72) times too small or too large.

Data input:

/// With maxes for ranges
t_data data_in[Nrec] =
{ //                            w_big   w_small  lambda    Q       power   force
  {"Shawyer (2008) a",    1.0 , 16    , 8    ,  C/2.45  ,  5900 ,  850  ,  16     },
  {"Shawyer (2008) b",    1.0 , 28    , 4    ,  C/2.45  , 45000 , 1000  , 214     },
  {"Juan (2012) TE011",   1.0 , 28    , 4    ,  C/2.5   , 32000 , 1000  , 214     },
  {"Juan (2012) TE012",   1.0 , 28    , 4    ,  C/2.45  , 50000 , 1000  , 315     },
  {"Brady et al. (2014) a", 1.0 , 24.75 , 16.5 , C/1.933 ,  7320 ,   16.9 ,  0.0912 },
  {"Brady et al. (2014) b", 1.0 , 24.75 , 16.5 , C/1.937 , 18100 ,   16.7 ,  0.0501 },
  {"Brady et al. (2014) c", 1.0 , 24.75 , 16.5 , C/1.88  , 22000 ,    2.6 ,  0.0554 },
};
...

I would like to see how the formula parameters behave with this outlier taken out.

Quote
I would like to see how the formula parameters behave with this outlier taken out.

Yes. The Brady outlier certainly shows us something, but it doesn't show us how the thruster works ideally. It shows how it works when something goes wrong. That's important to know, but not useful in the context of discovering the ideal operational model and parameters. For that purpose, we should avoid outliers once they have been identified. After the data is evaluated with "Brady b" removed, we can consider whether "Shawyer a" is a less-than-ideal case as well.