Developing Better Tools for LCD Quality Control
During my year at Sharp Devices Europe, I quickly became known as a computing odd-jobs man. One such job was revamping a tool that the team there had been using to measure the uniformity of their LCD modules.
Large LCD panels are backlit by arrays of hundreds of SMD LEDs, mounted in a grid at the rear of the panel. The light from these LEDs passes through a stack of optical sheets, which diffuse and polarize it. In an ideal world, this would produce perfectly uniform light across the entire screen. In practice, these optical sheets can develop mechanical problems, which show up as visible defects whenever the screen displays large areas of flat colour.
Example of an LCD screen with non-uniform light distribution (source)
Previously, the team had been using a specialised piece of equipment called a uniformity camera. These cameras were difficult to use and expensive to maintain. When you think about it, this makes no sense: the image above already contains all the information needed to judge the uniformity of the picture! A 'uniformity camera' is really just a regular camera plus a piece of software.
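To make that point concrete, here is one common way software can score uniformity from an ordinary photograph: sample luminance at a 3x3 grid of points and report the ratio of the dimmest sample to the brightest (the approach used in several display standards; the metric Sharp actually used isn't described in this post, so treat this as an illustrative sketch).

```python
import numpy as np

def nine_point_uniformity(luminance: np.ndarray) -> float:
    """Estimate backlight uniformity from a 2-D luminance image.

    Samples a 3x3 grid of points (the centres of a 3x3 tiling of
    the image) and returns min/max luminance as a percentage;
    100% means perfectly uniform.
    """
    h, w = luminance.shape
    rows = [h // 6, h // 2, 5 * h // 6]   # vertical sample centres
    cols = [w // 6, w // 2, 5 * w // 6]   # horizontal sample centres
    samples = np.array([luminance[r, c] for r in rows for c in cols])
    return 100.0 * samples.min() / samples.max()

# A perfectly flat image scores exactly 100%.
flat = np.full((600, 800), 200.0)
print(nine_point_uniformity(flat))  # 100.0

# An image that darkens towards one corner scores lower.
y, x = np.mgrid[0:600, 0:800]
vignetted = 200.0 - 0.05 * np.hypot(x, y)
print(nine_point_uniformity(vignetted))
```

Real tools average a small patch around each sample point rather than reading single pixels, to avoid being fooled by sensor noise.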
Much of the testing at Sharp is done with highly specialised in-house software, and the LCD team found that they needed a faster, easier way to test the uniformity of their LCD screens. Luckily, work had already begun on a project to develop software that would improve the quality of this uniformity data, and the speed at which it could be acquired.
I inherited a glob of Python code from a previous employee, who had found the project a little too ambitious, and I was asked to bring it back into working order. At the time, the code was completely uncommented and undocumented, oddly structured, and extremely slow, taking around 100 seconds to analyse a single image. It was also a command-line-only tool, which is fine for whoever wrote it, but not for the less technical engineers who needed to be able to use it.
I rewrote and refactored almost all of the code and performed a number of optimisations, most notably vectorizing the analysis functions, which had previously used explicit loops. All in all, I reduced the time taken to analyse an image from ~100 seconds to well under one second.
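To give a flavour of the kind of change involved: below, the same per-pixel calculation is written once with explicit Python loops and once as a single NumPy array expression. The calculation itself (percentage deviation from the mean) is only illustrative, not the tool's actual analysis, but this loop-to-array pattern is exactly what produces speed-ups of this magnitude.

```python
import numpy as np

def deviation_loop(image: np.ndarray) -> np.ndarray:
    """Per-pixel % deviation from the mean, with explicit loops (slow)."""
    mean = image.mean()
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = 100.0 * (image[i, j] - mean) / mean
    return out

def deviation_vectorized(image: np.ndarray) -> np.ndarray:
    """The same calculation as one array expression (fast)."""
    mean = image.mean()
    return 100.0 * (image - mean) / mean

# Both versions produce identical results; the vectorized one runs
# in optimised C inside NumPy instead of the Python interpreter.
img = np.random.default_rng(0).uniform(100, 200, size=(200, 300))
assert np.allclose(deviation_loop(img), deviation_vectorized(img))
```

On a multi-megapixel camera image, the interpreter overhead of the nested loops dominates, which is why eliminating them turns minutes into fractions of a second.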
I was also able to add a number of convenience features. For example, the program can now automatically read the proprietary .cr2 raw data files generated by our test camera and convert them into .tiff files, which most image analysis libraries can work with. It also extracts the EXIF data from the image files automatically and makes it available to the rest of the program; some of the calculations the program performs require this data, and previously it had to be entered manually. Users can now define their own 'calibrations' for certain cameras, brightnesses, focal lengths and so forth, and save them into a self-contained 'calibration file' for later use. Finally, I designed a simple GUI, based on the Enaml library.
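A calibration file might look something like the following. The real file format and field names aren't documented in this post, so this JSON-based sketch, and every field in it, is an assumption about the shape of the feature rather than the actual implementation.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Calibration:
    """Hypothetical calibration record; all field names are assumptions."""
    camera: str            # which camera this calibration applies to
    iso: int               # sensor sensitivity the calibration was made at
    focal_length_mm: float # lens focal length used
    flat_field_scale: float  # correction factor for lens vignetting

def save_calibration(cal: Calibration, path: str) -> None:
    """Write a calibration to a self-contained JSON file."""
    with open(path, "w") as f:
        json.dump(asdict(cal), f, indent=2)

def load_calibration(path: str) -> Calibration:
    """Read a calibration back from disk."""
    with open(path) as f:
        return Calibration(**json.load(f))

cal = Calibration("test-camera-01", 100, 50.0, 1.02)
save_calibration(cal, "calibration.json")
assert load_calibration("calibration.json") == cal
```

The appeal of a self-contained file like this is that an engineer can calibrate once per camera setup and then reuse that file across test sessions without re-entering anything.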
To top it all off, I bundled everything into a self-contained Windows executable, complete with a graphical installer (previously, the tool had been a command-line script with somewhat labyrinthine dependencies). Unfortunately, my industrial placement ended before the next round of testing began, so I never got to see it in action!
Next time you are looking at an LCD screen (perhaps you are looking at one right now?), consider the hundreds or thousands of hours of effort that went into designing the module, lighting and optics to make the light as uniform as possible. And the software that runs everything behind the scenes, of course!
Last updated July 19, 2017.