In the last article, I showed what the real world looks like through the Magic Leap One (ML1). For this article, I am going to share some pictures taken through the ML1's optics while it was displaying test patterns.
Above left is a crop of the original test pattern, scaled by 200%, next to a picture of the same portion of the test pattern taken through the ML1 (for reference, the whole test pattern is linked here). This test pattern, with its various features, is a tough but fair way to check different aspects of image quality. The single- and two-pixel-wide features are meant to test the resolution of the display. A hole was left in the larger pattern so that an iPhone 6s Plus displaying part of the test pattern could show through as a reference. There is additional information on how the picture was shot in the appendix at the end of this article.
Most of the Magic Leap demos use colorful but small objects, which work as "eye candy" while also hiding the lack of color uniformity across the FOV. Faces with skin tones are included in the test pattern because people are more sensitive to the color of skin. The test pattern also has large solid-white objects across the FOV to reveal any color shifting.
I used the Helio web browser to display the images, and some of the image resolution issues could be due to the way the ML1's Helio browser scales images in 3-D space. I tried capturing the test patterns and displaying them in the ML1 gallery, and the results were considerably worse. I viewed the same test pattern in the Hololens browser, and it is noticeably sharper than on the ML1, although the Hololens is a bit "soft" as well. It would be good at some point to go back and separate the browser scaling issues from the optics issues, but then again, this is the way the ML1 as a whole normally displays 2-D images.

I have looked at detailed content on two different ML1s, and none of it is sharp, so I think these images fairly represent the image quality of the ML1. Even if the scaling engine on the ML1 were poor, the degree of flare/glow and chroma aberration, which are caused by the optics, suggests that the resolution of the ML1 optics is low.
I only tested the "far focus" (beyond ~36 inches) mode, as it would have been very difficult to test the near depth plane focus mode. I could sense that the near focus plane was sharper than the far focus plane, as the diagrams from the Magic Leap patent applications suggest (see right). The far focus plane's light passes through the near focus plane's exit gratings on its way to the eye, which might be part of the problem. I would have liked to test the near focus plane as well, but there was no way to scale the test pattern that would work, nor was there a way I knew of to keep the headset in near focus "mode."
ML1’s Image Issues
The pictures below were taken through the ML1’s right eye optics with my annotations in red, green, and orange. You may want to click on the images to see detail. To be fair, closeup camera images will show flaws that may not be noticed by the casual observer. Generally, projected images look worse than direct view displays because of imperfections in the optics, but in the case of the ML1, the diffractive waveguides appear to limit the resolution.
While there are differences between how the human eye and a camera "see" an image, the photos give a reasonably good representation of what the eye sees. A camera is objective/absolute, whereas the human visual system is subjective/adaptive: it judges things like brightness and color relative to a local area, which makes the background in the picture seem darker than it does "live." The artifacts and issues shown in the photo are visible to the human eye.
Overall, the color balance is good in the center of the image. You will notice a color shift in the skin tones of the two faces in the test pattern, but it is not terrible until you reach the outer 15% of the image, where there is significant shifting to blue and blue-green, as can be seen in the photo.
Issues with the ML1 Image:
- Soft/blurry image – The softness of the image can be seen in the text and in the 1- and 2-pixel-wide test patterns. While some of this softness is likely due to the scaling algorithms, the image is soft overall. The claimed resolution of the ML1 imager is 1280 by 960 pixels, but the effective resolution is about half that in both directions, or closer to 640 by 480 pixels, in the center of the FOV, and lower in the periphery (see the arithmetic sketch after this list).
- Waveguide glow (out-of-focus reflections) – While the waveguide glow is most noticeable around large bright objects (such as the circles and squares in the test pattern), it also lowers the contrast, and thus the effective resolution, of details like the text.
- Color ripples across the FOV that move with head and eye movement (see also the photo of the white and black rectangles on the right) – The color consistency across the FOV is relatively poor, which is common with all the diffractive waveguides I have seen to date.
- Blue-green and blue color shifts on the left and right sides of the image – Once again, this is a common problem with diffractive waveguides. On the right eye, the left side is deficient in red, and the right side is lacking green and red (bluish); it is vice versa for the left eye.
- Brightness falls off as you move away from the center of the FOV. This problem is common with most projection-based display devices. The ML1 seems better than Hololens in this respect.
- Chroma aberrations (lateral color separation) – These can be seen at the edges of the circle in the photograph, with a red-tinted edge on one side and a blue-green tint on the opposite side. The effect is easier to see in the zoomed closeup in the "More On Resolution" section below.
- Binocular overlap disparity – This is a common problem with stereo headsets that have a small FOV. With the image filling the FOV, each eye gets approximately the same image, but shifted. When you look at the image with both eyes, the left eye's image cuts off on the right side, and vice versa for the right eye's image. What one sees is a dark region on each side. For the image, which appears to be about 4 feet away, I have shown the visual cut-off point with an orange dashed line (a rough geometric estimate is included in the sketch after this list). Solving this problem would further reduce the FOV, as a significant percentage of the FOV would have to be held in "reserve" to address the issue.
- Cropping the FOV to support interpupillary distance (IPD) adjustment – Based on how the test pattern is viewed at full size, I think they are reserving about 130 pixels horizontally (about 10% of the 1280 horizontal pixels) for their electronic IPD adjustment (also covered in the sketch below). {Warning: this is a very indirect measurement and could be inaccurate, as I was not able to control the source image.} Hololens appears to reserve a similar number of pixels for the same function.
- Diffractive Waveguide is catching light from the real world resulting in color “flares” – something I noted in the prior article.
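To make the numbers in the list above concrete, below is a minimal sketch of the arithmetic behind the effective-resolution, IPD-reserve, and binocular-overlap points. Note the assumptions: the 130-pixel IPD reserve is my indirect estimate, and the ~40-degree horizontal FOV and 64 mm IPD used in the toy overlap model are assumed round numbers, not published Magic Leap figures.

```python
import math

NATIVE_H, NATIVE_V = 1280, 960   # claimed LCOS imager resolution

# 1. Effective resolution: the observed softness suggests roughly half the
#    claimed resolution in each direction in the center of the FOV.
print(f"Effective center resolution: ~{NATIVE_H // 2} x {NATIVE_V // 2}")  # ~640 x 480

# 2. Electronic IPD adjustment: my indirect estimate of reserved pixels.
ipd_reserve_px = 130  # estimate only; see the warning in the list above
print(f"IPD reserve: ~{ipd_reserve_px / NATIVE_H:.0%} of horizontal pixels")  # ~10%

# 3. Binocular overlap: a toy parallel-frustum model of the dark band each
#    eye sees. The FOV and IPD values here are assumptions for illustration.
ipd_m, image_dist_m, fov_deg = 0.064, 1.22, 40.0  # assumed IPD, ~4 ft image, ~40 deg FOV
band_deg = math.degrees(2 * math.atan(ipd_m / (2 * image_dist_m)))
print(f"Non-overlap band: ~{band_deg:.1f} deg (~{band_deg / fov_deg:.0%} of FOV)")
```

Under those assumptions, the non-overlapping band works out to roughly 3 degrees per side, a few percent of the FOV, which is consistent with calling the reserve "significant" for a headset with a small FOV to begin with.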
More On Resolution
In order to see more detail, the picture on the left was taken with the camera zoomed in by over 2X, giving more than five camera samples per ML1 pixel (click on the picture to see it at full resolution). The part where the iPhone shows through has been copied and moved to line up with the text in the ML1's image. The iPhone's image shows what the text should look like if the ML1 could resolve it.
Text on the ML1 is noticeably soft by any measure. The ML1 is less sharp than Hololens, and less sharp than Lumus's waveguides by an even wider margin. The one-pixel-wide dots and 45-degree lines are barely visible.
Conclusions
I was expecting the color uniformity problems and image flare/glow based on my experiences with other diffractive waveguides. The color in the center of the FOV is reasonably good on the ML1.
But I just can't get past the soft/blurry text. I first noticed this with the text in the Dr. G's Invaders teaser (on the right), which is why I set out to get my own test pattern onto the ML1. I don't know yet how much of this softness is caused by the dual focus planes, but I suspect it is a reason why the ML1 is blurrier than Hololens.
At some time in the future, I hope to bypass the 3-D scaling and drive the display directly to better isolate the optical issues from any scaling issues. I would also be curious whether I could lock the device into "close focus plane mode" and test that mode independently. As I was driving the ML1, very soon after I took my eye away from it, it switched back into far focus plane mode (which is why I did not run a test in the near focus plane mode). If someone wants to help with this effort, please leave a note in the comments or write to info@kguttag.com.
Appendix: The Setup for Taking the Test Pattern Pictures
I used an Olympus OM-D E-M10 Mark III mirrorless camera, which I specifically chose for taking pictures through headsets due to its size and functionality. On this camera, the distance from the center of the lens to the bottom of the body is less than the distance from my eye's pupil to the side of my head, so it fits inside a rigid headset with the lens centered where my pupil would have been. In portrait mode, it captures 3456 pixels wide by 4608 pixels tall, which is over two camera samples per pixel of the ML1's spec'ed 1280 by 960-pixel LCOS device. The camera has 5-axis optical image stabilization, which greatly helps with the hand-held shots I was required to take.
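For those who want the sampling math, below is a minimal sketch, assuming the camera's frame roughly covers the ML1's displayed image (an approximation, since the exact framing varied). The 2X-zoomed case corresponds to the closeup in the "More On Resolution" section above.

```python
# Camera samples per ML1 pixel, assuming the camera frame roughly covers
# the ML1's displayed image (an approximation; exact framing varied).

CAM_W, CAM_H = 3456, 4608   # E-M10 Mark III in portrait orientation
ML1_W, ML1_H = 1280, 960    # ML1's spec'ed LCOS resolution

samples_h = CAM_W / ML1_W   # horizontal camera samples per ML1 pixel
print(f"Full-view shot: ~{samples_h:.1f} camera samples per ML1 pixel")  # ~2.7

# Zooming in by a bit over 2X trades coverage for sampling density, which
# is how the earlier closeup reaches more than five samples per ML1 pixel.
zoom = 2.0
print(f"2X-zoomed shot: >{samples_h * zoom:.1f} samples per ML1 pixel")  # >5.4
```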
The "far focus" of the ML1 is set to ~5 feet (~1.5 meters). I put a test pattern on this website and used the ML1's Helio browser to bring up the image. I then moved the ML1 headset back and forth until the test pattern filled the view, which occurred when the virtual image was about 4 feet away.
The picture on the right shows the setup of the iPhone when viewed from an angle. It gives you an idea of the location of the virtual image relative to the phone. This picture was taken by the camera through the ML1, and only the red annotations were added later.
From other experiments, I knew the "far focus" of the ML1 is about 5 feet. I set up an iPhone 6s Plus in a "hole" left in the test pattern for viewing the phone. To have the phone in focus at the same time as the virtual image, I placed the phone behind the pattern and adjusted its location until both the phone and the ML1's image were in focus as seen by the camera. I then scaled the iPhone's display so the text was the same size as the text displayed on the ML1, as seen by the camera. In this way, I could show what the text in the high-resolution test pattern should have looked like through the camera, and it verifies that the camera was capable of resolving single pixels in the test pattern.
The iPhone's brightness was set to 450 cd/m² (daytime full brightness) so that it could still be seen after being reduced by about 85% through the ML1; the net was only about 70 cd/m². I took the picture in camera RAW and then white balanced it based on the white in the center of the ML1's image, which makes the iPhone's display look slightly shifted toward green. The picture was shot at 1/25th of a second to average out any field-sequential effects.
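As a quick sanity check on those brightness numbers, here is the attenuation arithmetic as a minimal sketch; the ~85% light loss through the ML1 optics is my estimate, not a published specification.

```python
# Net iPhone brightness as seen through the ML1. The ~85% light loss
# through the ML1 optics is my estimate, not a published specification.

iphone_nits = 450.0       # iPhone 6s Plus at daytime full brightness, cd/m^2
ml1_transmission = 0.15   # ~85% of real-world light is blocked by the optics

net_nits = iphone_nits * ml1_transmission
print(f"Net brightness through the ML1: ~{net_nits:.0f} cd/m^2")  # ~67.5, i.e., the "about 70" above
```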

For reference, the image on the left is a pass-through frame capture taken by the ML1 from about the same place. With pass-through, the exposure of the ML1's camera and the exposure of the test pattern can be set independently. In this image, the ML1's camera appears to have focused on the far background, which puts the iPhone out of focus, but you can get a feel for how bright the iPhone was set.
Interestingly, I saw some different scaling artifacts in this pass-through image than in the images from my camera; in particular, thin black lines on a white background tend to disappear.
The pass-through image is biased to favor white over black: looking at the 1-pixel-wide features under the "Arial 16 point" text, the black 1-pixel dots and lines are all but lost, and even the two-pixel-wide ones to their left are almost gone.
Acknowledgment
I would like to thank Ron Padzensky for reviewing and making corrections to this article.


