@Star-Drive :
I'm still asking about the huge discrepancy between the expected linear displacement of 32.3µm (assuming only the flexure bearings' restoring torque) in response to a calibration pulse of 29.1µN, and the actual readings on the charts' vertical scale, which show a linear displacement of only 1µm to 2.5µm for the calibration pulses.
There is an order-of-magnitude inconsistency between the value expected from the flexure stiffness and the value recorded on the display. Has this been acknowledged or investigated by the team at Eagleworks?
I'm not disputing that the thrust measurements are proportional (i.e., twice the deviation of a cal. pulse => twice 29.1µN), but since the fixed ratio of µm displacement per µN of thrust is at the heart of the experiment, such a discrepancy can only weaken the case for the charts published so far. This needs to be clarified one way or another. It could be a problem with the calibration of the Philtec D63 gain, or a biased scaling factor between the analog output of the D63 and the final rendering of the vertical scale on the display...
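To make the gap concrete, here is the arithmetic on the numbers quoted above. This is purely the figures from the thread; it makes no claim about the apparatus itself:

```python
# Expected vs. observed displacement response to the 29.1 µN calibration pulse.
# All numbers are those quoted in this discussion; the computation only makes
# the order-of-magnitude gap explicit.

F_cal = 29.1             # calibration force, µN
d_expected = 32.3        # displacement expected from flexure stiffness alone, µm
d_observed = (1.0, 2.5)  # range actually read off the chart vertical scale, µm

cst_expected = d_expected / F_cal                 # implied µm per µN
cst_observed = tuple(d / F_cal for d in d_observed)

print(f"Expected Cst: {cst_expected:.3f} µm/µN")
print(f"Observed Cst: {cst_observed[0]:.3f} to {cst_observed[1]:.3f} µm/µN")
print(f"Discrepancy factor: {d_expected / d_observed[1]:.0f}x "
      f"to {d_expected / d_observed[0]:.0f}x")
```

So the implied displacement-per-force constant disagrees by a factor of roughly 13 to 32, hence "order of magnitude".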
Frobnicat & Crew:
Dr. White and his NASA interns are the folks who performed the original force calibration work on this torque pendulum, so your questions might be better answered by him. However, it's my understanding that the torque pendulum's actual micron displacement observed for each test run is dependent on its specific total active mass load, balance weights, and all their locations on the torque pendulum arm for the test run in question. So as long as we reference the near-constant calibration force from our electrostatic fin calibration system before and after each test run, and then use that specific displacement yardstick of the moment as the true measure of the test article's generated forces, it doesn't matter what the actual micron displacement turns out to be for each data run. And that is what we've used to date to report our generated forces. If there is a major problem with that approach, please let us know.
Best, Paul M.
Well, the pendulum is basically a device that, ideally, converts force into displacement. What is measured is displacement D=Cst*F. The electrostatic calibration system looks like a robust way to achieve a stable reference force Fcal of 29.1µN. So there is no question that Dthrust=Cst*Fthrust together with Dcal=Cst*Fcal yields Fthrust=Fcal*Dthrust/Dcal, where Cst cancels. That's a good point for the design: there is no need to know Cst precisely. But not needing to know Cst precisely is one thing; having an order-of-magnitude discrepancy in its absolute value is another. Cst is at the core of the conversion of µN (what we want to know) into µm (what we measure): at the very least, a central aspect of the balance seems very poorly characterised.
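The cancellation of Cst in the ratio method can be sketched numerically. The thrust value and the Cst values below are made-up placeholders; the point is only that the recovered Fthrust does not depend on Cst:

```python
F_cal = 29.1   # µN, electrostatic calibration force (from this discussion)

def recovered_thrust(F_thrust_true, cst):
    """Simulate the ratio method: convert both forces to displacements with
    an arbitrary constant Cst, then recover thrust from the ratio."""
    d_thrust = cst * F_thrust_true   # µm
    d_cal = cst * F_cal              # µm
    return F_cal * d_thrust / d_cal  # Cst cancels here

# Whether Cst is ~1.11 µm/µN (as the flexure data would imply) or 30x
# smaller (as the chart scale suggests), a hypothetical 50 µN thrust is
# recovered identically:
for cst in (1.11, 1.11 / 30):
    print(recovered_thrust(50.0, cst))
```

This is why the discrepancy does not directly invalidate the reported thrust values; it "only" leaves the balance poorly characterised in absolute terms.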
Add to that the fact that the whole apparatus is slightly tilted. With the actual vertical-scale chart readings (which contradict both the known flexure bearing stiffness and the natural oscillation period of ~4.5s that is clearly visible on some charts), there is no way to reconstruct this tilt from the data. You seem to imply this tilt is quite low; nevertheless, it does play a significant role in the rest equilibrium point of the balance, since a perfectly horizontal setting wouldn't allow a stable rest position (which makes perfect sense, since there is no angular tuning at the axis and the flexure bearings' rest position will drift thermally and with loads). So gravity plays a role. So thermal displacements of a part's centre of mass relative to its fixation point can have a significant impact on the rest position, registering as sustained displacements at the LDS. Frankly, from the shapes of the responses to the tests, I don't really believe that thermal displacements alone can explain the whole response, but others may be more finicky about that, and clearly assessing those aspects requires a correct characterisation of the Cst between µN thrusts and µm readings. For instance, if the vertical scale shows 1µm when it should really show 10µm, then one would have to explain 300g moving 1mm instead of 300g moving only 0.1mm. The correction (if it is indeed needed) would actually make it harder to explain the effects by thermal displacements.
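A crude model of the tilt argument: on a slightly tilted pendulum, a shift of a part's centre of mass changes the gravity torque, which shows up at the LDS. The tilt angle, torsional stiffness, and LDS arm radius below are illustrative guesses, NOT measured values; only the linearity of the result matters:

```python
import math

# Crude model of how a thermal shift of a 300 g part's centre of mass would
# register at the LDS on a slightly tilted torque pendulum. Tilt, stiffness
# and LDS radius are illustrative assumptions, not measured values.

m = 0.300                  # kg, mass of the part (from the discussion)
g = 9.81                   # m/s²
alpha = math.radians(1.0)  # assumed tilt of the rotation axis
k = 0.97                   # N·m/rad, assumed torsional stiffness
r_lds = 0.3                # m, assumed radius of the LDS read-out point

def lds_displacement_um(delta_m):
    """LDS reading (µm) caused by a centre-of-mass shift delta_m (m)."""
    torque = m * g * math.sin(alpha) * delta_m  # change in gravity torque
    return torque / k * r_lds * 1e6

# The response is linear in the CoM shift, so a 10x larger true chart
# displacement requires a 10x larger thermal displacement to explain it:
print(lds_displacement_um(0.1e-3), lds_displacement_um(1.0e-3))
```

Whatever the actual geometry, the linearity is what makes the corrected (larger) readings harder, not easier, to attribute to thermal motion.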
A change of stiffness at the level of the flexure bearings due to axial load conditions can't explain the 4.5s period in the charts. I can't find Riverhawk's charts indicating how stiffness changes with axial load, but attributing the 1µm reading for the 29.1µN cal. pulse to an added axial-load-dependent stiffness (30 times higher) would directly contradict the dynamics of the charts, given the moment of inertia derived from the mass distribution (even if it is uncertain by up to 50%).
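The period argument can be sketched as a consistency check: the ~4.5s oscillation pins down the torsional stiffness for a given moment of inertia, and that stiffness in turn predicts the static deflection for the 29.1µN calibration force. The moment of inertia and arm radius below are illustrative guesses, NOT the actual values:

```python
import math

# Consistency check between the ~4.5 s oscillation period visible on the
# charts and the displacement response to the 29.1 µN calibration force.
# I (moment of inertia) and r (effective arm radius for both the force and
# the LDS read-out) are illustrative guesses, not measured values.

T = 4.5      # s, natural oscillation period (from the charts)
I = 0.5      # kg·m², assumed moment of inertia of the loaded arm
r = 0.3      # m, assumed effective arm radius

k = I * (2 * math.pi / T) ** 2   # torsional stiffness implied by the period
F = 29.1e-6                      # N, calibration force
theta = F * r / k                # rad, static angular deflection
d = theta * r                    # m, linear displacement at radius r

print(f"k = {k:.3f} N·m/rad, displacement = {d * 1e6:.1f} µm")
```

The key point: for the same moment of inertia, a stiffness 30 times higher would shorten the period by a factor of √30 ≈ 5.5, which the ~4.5s oscillations visible on the charts rule out, even with the 50% uncertainty on I.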
This very apparent discrepancy could be due to a simple scaling factor in the display; it could also indicate a problem in the operating conditions of the LDS, with the LDS working not around 500µm but nearer the peak of its response curve, at reduced sensitivity and reduced linearity.
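To illustrate why operating near the peak reduces sensitivity, here is a toy single-peak response curve for an optical displacement sensor. This is NOT the actual Philtec D63 calibration curve; the shape, peak position, and width are invented for illustration only:

```python
import math

# Toy single-peak response curve for an optical displacement sensor, used
# only to show why operating near the peak reduces sensitivity. This is
# NOT the actual Philtec D63 calibration curve.

def response(x_um, peak_um=300.0, width_um=200.0):
    """Hypothetical sensor output (arbitrary units) vs. gap distance (µm)."""
    return math.exp(-(((x_um - peak_um) / width_um) ** 2))

def sensitivity(x_um, h=0.01):
    """Numerical derivative dV/dx: output change per µm of displacement."""
    return (response(x_um + h) - response(x_um - h)) / (2 * h)

# On a steep flank of the curve the slope is large; right at the peak the
# slope vanishes, so the same µm of motion produces far less signal:
for x in (500.0, 310.0, 300.0):
    print(f"gap {x:5.1f} µm: sensitivity {sensitivity(x):+.5f} per µm")
```

An operating point drifting toward the peak would thus compress the displayed displacements, which is one possible (hypothetical) route to the observed scale discrepancy.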
So while I wouldn't qualify the problem as major as far as Fthrust=Fcal*Dthrust/Dcal is concerned, it would still appear as a manifest and serious consistency problem to anyone looking at the data in some depth and requiring a proper characterisation of the system. It could also indicate that the LDS is operating under less-than-ideal conditions, and that alone would make it worth further inquiry.
Side note: I'm only an anonymous contributor trying to understand what is going on. I do have a background in mechanical engineering, including metrology, but can't claim this is my professional activity (nor is it in aerospace). I do believe my arguments are correct and worthy of consideration, but that does not preclude a blunder somewhere. I understand that, with limited time and resources, the team at Eagleworks has to weigh its priorities. Anyhow, thank you for taking the time to read and answer my concerns.