When buying optics, the first question customers ask is "what is the correction value and what assurances can be given?" But the real point is: how can we know how trustworthy the certificates and the quality-control tests made on the optics are?
The "classic" tests such as the Foucault are still in use and can be considered a reliable method to accurately evaluate primary optics. It is in fact normal practice for modern constructors to use both the "old" tests and the modern interferometric ones, at least during production. Normally, several instruments are mounted on the optical bench in order to run a sequence of tests and obtain complete, reliable results.
There are testers whose sole business is to review optics received from constructors/distributors, either because of a customer complaint or because a second opinion is wanted. Certifying organisations are good but unfortunately few are available and their fees are very high. There are also several good operators/technicians specialised in this field (the best have no commercial collusion or interest) who can verify the quality of optics. Even after selecting a reliable tester among the many companies and reviewers available (considering only professionals with a good reputation), it is still crucial to agree on the meaning of the figures declared on the optical certificates.
It is essential that tests clearly declare all the conditions applied: the instruments and software used, the test methods and so on. If this is not done, the test cannot easily be compared to others. This should be a basic principle of any scientific test literature, where everybody needs to use the same language and follow the same unequivocal procedures. Unfortunately, this is not always the case in the amateur world, where it is normal to use different test benches or optical references. If we sent a Newtonian mirror to three different testers, we would expect the results to be comparable. In reality, they might not be if, for instance, each of them used a different type of test bench or a different data reduction. Certainly, two Hartmann tests made with a similar data-reduction algorithm and a similar number of points per pattern should give very similar final results. We also believe it is rather doubtful when people claim they can estimate differences of 1/10 lambda at the eyepiece or in the field, whatever their experience, reputation or good faith.

The significance of both the "old" Foucault test and the modern interferometric tests can be compromised if, for instance, the number of test runs (how many times the test was repeated) is not stated, or the size of the Foucault slit and the number of zones read are not specified. Likewise, for the interferometer, if the source wavelength and the number of sampling points are not detailed, the test results can again be void. Both tests require the setup to be correctly aligned and the beamsplitter or Ross lens to be of good quality: a beamsplitter is NOT a neutral element in a converging beam, and the Ross lens can cause gross misjudgements; over distances of only 50 mm it can carry a tolerance of a few percentage points, so be careful when quoting the radius of curvature of the best fit.
We should note here that the Foucault test normally works with a continuous source "centred" at about 560 nm, while interferometers generally work at about 660 nm: a variation of about 17% between the two is therefore normal and, moreover, should definitely be noticeable. That said, we should not forget we are comparing apples to oranges…
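As a rough illustration of that wavelength effect (the 1/8 wave figure below is our own hypothetical example, not taken from any certificate), the same physical error expressed in waves shrinks or grows with the test wavelength:

```python
# Re-express a wavefront error, given in waves at one test wavelength,
# in waves at another. Illustrative values only: a hypothetical 1/8 wave
# PtoV error at a Foucault source near 560 nm, restated at a typical
# interferometer wavelength of 660 nm.

def convert_waves(error_waves, lambda_from_nm, lambda_to_nm):
    """The physical error in nm is fixed; only the unit of measure changes."""
    error_nm = error_waves * lambda_from_nm   # physical size of the error
    return error_nm / lambda_to_nm

ptv_560 = 1 / 8                               # 0.125 wave at 560 nm = 70 nm
ptv_660 = convert_waves(ptv_560, 560, 660)    # the same 70 nm error at 660 nm
print(round(ptv_660, 3))                      # ~0.106 wave: looks "better" at 660 nm
```

The ratio 660/560 is the source of the roughly 17% variation mentioned above: the identical mirror scores a numerically better fraction of a wave at the longer wavelength.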
We will now show an example of optics testing, although for privacy reasons we have removed all personal references. The tests were made on the same mirror, and the results give a very clear picture of the modus operandi (MO) of many non-professional testers. An optical tester should be a professional with deep knowledge of the subject, a regular user of optics who can help amateurs make sure the optic fitted to their telescope is up to standard. When there is no precise declaration of the methods used to carry out the tests, nobody except a certified testing organisation can be considered reliable; this is the difference between an amateur and a professional. The method used by Oldham Optical is simple and well explained, and it guarantees a satisfactory quality standard; it is up to the testers to declare their methods and norms. We decided to have our optics (a fast mirror in Suprax made by Oldham Optical) tested by a professional laboratory that normally works for professional observatories and supplies test certificates.
| Tester | λ (nm) | PtoV (wave) | RMS (wave) | Strehl | Method |
|---|---|---|---|---|---|
| Oldham | 550 | 0.08 | 0.025 | 0.98 | Double Pass Null in autocollimation |
| "A" | 532 | 0.449 | 0.077 | 0.79 | Interferometer with Ross lens |
| Professional Tester | 632 | 0.10 | 0.016 | 0.989 | Interferometer in autocollimation |
General comments: we notice immediately that the tests made by testers A and B are incongruent and, most importantly, their Strehl values (a global index describing the overall quality of the optical surface) differ from the one declared by the constructor. Their values are actually wrong, while the Oldham and Professional Tester values agree (see why below). In light of the above, we asked Esmond Reid, an English professional optician, to carry out test # V for us, a Double Pass Null test in autocollimation. Given the similarity between the Oldham test and our test # IV, we asked him to reconfirm the correction: PtoV = 0.10 on the wavefront, at 632 nm. We will now consider the tests one by one (excluding the one carried out by Oldham Optical, which is our reference) in order to understand the meaning of the figures and the way they were obtained (please bear in mind the different wavelengths used).
Test # II (A) – comments: this test was done using a Ross lens, which as a consequence introduced coma into the system. Because the Ross lens was tilted with respect to the mirror under test, all the values (PtoV, RMS and Strehl) dropped remarkably. When asked to explain this, the tester said that no coma was detected during the test. But what else could the "S"-shaped fringes on the interferogram be? The same asymmetry is visible on the "3D graphic" image as a twist of the focal surface, while on the "PSF map" and "PSF surface" images it appears as an irregular pattern. Although low-level roughness is visible at the centre of the mirror, the tester could not measure it because of the poor setup (i.e. the Ross lens inclination). Knowing that the mirror belonged to a Newtonian telescope, the tester should have asked for the value of its obstruction in order to mask the central area and set the Ross lens correctly. Had he done so, the result would have been different. We then asked a few questions: 1) if the asymmetry is not coma, what is it? 2) can you confirm the test method (single pass with Ross lens, double pass, or other)? A sketch showing the layout and the measurement would be enough. We did not get a reply. The Ross lens test is rarely carried out nowadays: the lens must be made specifically for the focal length of the optics under test and, when the tolerances of the lens itself are checked, the results are worrying: a 5% error (over 50 mm distances) generates a 20% drop in a 0.99 Strehl value (so even a perfect primary mirror would be assessed as only just acceptable). In any case, this test method is not recommended for fast optics (what a surprise!).

Test # III (B) – comments: this tester carried out the test at the centre of curvature, using a software program to assess the wavefront. This is a "single pass" test.
Having assessed the obstruction correctly, the tester confirmed the good quality of the optics but, again, provided only a small amount of information about the method and the settings used to obtain the results. Consider the PtoV value, for instance: it should be obtained from several readings, and their sequence is also very important. If details like this are not declared, the test results cannot be professionally compared to others and the test can only have amateur value.

Test # IV (Professional Tester) – comments: this professional operator used metric-standard equipment and worked in a controlled environment. The test assessed the optical surface but, because it was done in AUTOCOLLIMATION (double pass) and the light is therefore reflected twice by the mirror, the measured values (PtoV and RMS) must be halved to be compared to those obtained when the mirror is used to observe the sky (single pass). For this reason the values for sky observation (single pass) are PtoV = 0.1 and RMS = 0.016 on the wavefront, and the tester states that the Strehl ratio does not change whichever is considered, the surface or the returned wavefront. We made a few ray tracings of this mirror with Zemax and established that it is basically impossible to have a 0.032 RMS wavefront and a 0.989 Strehl ratio: for your information, a 0.032 RMS wavefront equates to a 0.959 Strehl ratio.
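The arithmetic above can be checked with the extended Maréchal approximation, Strehl ≈ exp(−(2πσ)²), where σ is the RMS wavefront error in waves (this is our own check, not the software used by any of the testers):

```python
import math

def strehl_from_rms(rms_waves):
    """Extended Marechal approximation: Strehl ~ exp(-(2*pi*sigma)^2),
    with sigma the RMS wavefront error in waves."""
    return math.exp(-(2 * math.pi * rms_waves) ** 2)

# Autocollimation (double pass) readings are halved for single-pass use:
double_pass_rms = 0.032             # RMS as measured in autocollimation
single_pass_rms = double_pass_rms / 2   # 0.016 wave on the sky wavefront

print(round(strehl_from_rms(double_pass_rms), 3))  # ~0.960
print(round(strehl_from_rms(single_pass_rms), 3))  # ~0.990
```

This supports the point made in the text: 0.032 wave RMS corresponds to a Strehl of about 0.96 (the 0.959 quoted above, within rounding), not 0.989; the halved 0.016 wave value is what matches the 0.989 on the certificate.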
Test # V – comments: there is not much to say about this test. It was carried out in autocollimation and gave the same results as test # IV: with the middle area covered by the central obstruction, it shows a 1/10 wavefront mirror. The method of test # V is the same as test # IV, the only difference being that test # IV uses an interferometer to assess the quality of the image. The tester (V) states that the surface is of high quality with low roughness. An expert optical operator states the following: "Nobody in the trade would ever dream of quoting explicit values better than 1/10 lambda PtoV from the notorious interferometer test: the 1/10 value is recognised as the one beyond which significance and reproducibility drop. It should also be considered that the interferometer configuration hugely affects the residual spherical value. Unless the beamsplitter is excellent (and even the best systems normally use flat beamsplitters at less than 1/2 lambda on the wavefront at 633 nm), only an ACP (Absolutely Common Path) system can eliminate any spherical contribution. Yet it seems that, with a simple system with a laser diode able to produce some interference fringes, one can have the final word on testing. A quick read of Malacara makes it clear how all these tests have, somehow, the tendency to discredit organisations, whatever their importance or size; the interferogram, too, is interpreted by points (the software identifies and schematises the fringes with an algorithm that reassembles them from discrete points). In any case it is NOT a definitive system, but it is the most reproducible available today and, except for the microstructures, the one giving a good reading of a good mirror." Having said that, how is it possible for an interferometric test to totally change the results of an astronomical mirror?
How could there be such flattering results (we will not consider those made in bad faith, because they are illegal and indictable)? It is dead simple: we followed the metric laboratory test and changed only a few parameters to obtain very different results (and much more is possible…). Here are four tests made by a professional optical technician with a setup regulated to a standard; we did not want to go further, just give a small demonstration:

TEST 1 – full aperture, no obstruction: PtoV = 0.288, RMS = 0.039, Strehl = 0.942
TEST 2 – aperture 95%, no obstruction: PtoV = 0.253, RMS = 0.035, Strehl = 0.953
TEST 3 – aperture 100%, obstruction 120 mm: PtoV = 0.099, RMS = 0.017, Strehl = 0.989
TEST 4 – aperture 95%, obstruction 120 mm: PtoV = 0.092, RMS = 0.016, Strehl = 0.990

The tests were carried out on the surface, but in autocollimation. Please note how easy it is to transform the numbers truthfully and to hide the test method from the customer; in fact no unification protocol is mentioned anywhere. When electronic/optical instruments are used for a control or certification test, it obviously has to be carried out by professional, experienced people, using the right equipment and following a standard procedure, as required in the industrial sector. From the Oldham Optical web site we report the opinion of Norman Oldham, who has 20 years' experience in optics manufacture, and whose view is very similar to that of many other reputable manufacturers: «The results of the tests carried out by Oldham Optical are pretty similar to those of another Double Pass Null tester. We believe that Double Pass Null tests done by different testers give similar and consistent results, always around 1/30 lambda or better, while interferometer tests done by different testers give different and inconsistent results.»
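A minimal sketch of why masking the central obstruction flatters the numbers, as in TEST 3 and TEST 4 above. The wavefront, the 0.25-wave central bump and the 30% obstruction below are entirely synthetic assumptions of ours, not the actual mirror data:

```python
import numpy as np

N = 400
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
r = np.hypot(x, y)
aperture = r <= 1.0

# Hypothetical wavefront: flat apart from a 0.25-wave bump at the centre.
wavefront = 0.25 * np.exp(-(r / 0.2) ** 2)

def figures(mask):
    """PtoV, RMS and Strehl (extended Marechal) over the selected pixels."""
    w = wavefront[mask]
    ptv = w.max() - w.min()
    rms = w.std()
    strehl = np.exp(-(2 * np.pi * rms) ** 2)
    return ptv, rms, strehl

full = figures(aperture)                    # full aperture, bump included
masked = figures(aperture & (r >= 0.3))     # 30% central obstruction masked out
```

With the central zone hidden, the bump disappears from the statistics: PtoV collapses to a small fraction of its full-aperture value and the Strehl rises accordingly, exactly the pattern seen between TEST 1 and TEST 3.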
We believe that interferometers are reliable when the accuracy required is larger than the wavelength but, for astronomical mirrors, where the required accuracy is a small fraction of a wavelength, the results depend very much on the quality of the equipment and the ability of the operator. In any case, the precision is always limited by the auxiliary optics used in the interferometer and, generally speaking, the results are always worse than those obtained with Double Pass Null tests. When interferometer results suggest a big bump in the centre of the mirror, this is likely due to the settings or to the interferometer itself. It is not unusual for some interferometer tests to be simply made up. On the other hand, when a mirror is worse than ¼ lambda PtoV and the interferometer results seem to show edge problems and asymmetry, again, the reasons might be those mentioned above. We believe that the Double Pass Null test is accurate and gives reliable, consistent results even when carried out by different testers, and we think most mirror manufacturers agree with us (see Technical Topics for a complete explanation). To be more accurate, though, we must say that among the non-interferometric tests the HARTMANN is the most reliable and can give the same results every time. The LYOT can read very small errors but can miss some, even bad ones; the FOUCAULT is the most versatile, with the widest range of scales, but also harder to interpret. These different test methods do not compete with each other; each has its best application (the interferometric test is generally used for finished surfaces, the Ronchi is very good for a first figuring check, the Lyot was fundamental for the first interferential quartz prisms…).
It is important in any case that the results are comparable, except for non-standard readings (during Foucault readings it is also normal to disregard the most inconsistent results). What really matters, after all, is to understand how a tester obtained the results and to verify in practical use that the optics perform accordingly. What about our tests, then? They are useless! Simply because, with the exception of the constructor, who uses a declared method supported by technical literature, no professional would ever dream of mixing different reports that omit essential information and tests that do not conform to optical metrology standards. This is obviously not only NortheK's point of view but also the position of many professionals in the optical (not only astronomical) sector. For this reason, if the Double Pass Null test is not satisfying then nothing else could be, and those who disagree simply do not know what they are talking about. Do you think you could notice the difference between an optic corrected to ¼ PtoV and one at 1/8 PtoV? We are sure you could not, and we challenge you to demonstrate it. The Strehl ratio is a handy, intuitive parameter which gathers various meaningful parameters describing the overall quality into a single figure; it is an easy way to assess amateur optics. Because the Strehl ratio cannot be measured directly, but only calculated by means of various algorithms, the results can vary considerably depending on how this is done, especially when the algorithms and the procedures used to acquire and reduce the data are not made clear. The key is the operator, who needs to be skilled, trustworthy and available to answer any question from the customer. The RMS value also gives a good general idea of the optical surface. Other tests, e.g.
the one assessing the roughness of the surface, can easily be altered, since the value can be increased or decreased with simple operations at the optical bench. It is also curious how, although using the same methods and products, some mass producers always seem to have highly polished surfaces while others are always named as producers of rough surfaces. To clarify: the PtoV is a meaningful value when judging small amateur optics, but serious manufacturers know that on its own it is not sufficient and can hardly certify a decent optic. We do not want to go into further detail; we leave that to readers with the competence and the inclination to do so… Note: all the documents relating to the above tests are held by NortheK. We decided to keep them undisclosed so as not to breach the privacy of the amateur testers. Obviously we can prove the test results are truthful, in court if necessary. Together with these documents we also hold the correspondence documenting the procedure we followed to have the tests done, including delivery notes, recipients and so on…
© NortheK. All rights reserved.