One of the key performance numbers we report in every review is brightness uniformity. If you read our reviews regularly, you know that, depending on the type of projector and other factors, our measurement can range from around 60% to about 90% in the best examples. The concept of brightness uniformity seems intuitively clear: it's a measure of how evenly bright the image is across the screen, and it's logical that a projected image should be free of obvious hotspots or unintended dark areas. However, there's a lot of misunderstanding about what brightness uniformity means, and it's easy to take the number too seriously.
Part of the issue is that there is no good way to sum up everything involved in uniformity in a single number--a point we'll come back to later. But there's an even more basic problem interpreting brightness uniformity measurements: there are quite a few different methods to come up with that number, and you can rarely be sure which has been used.
At ProjectorCentral, we follow a widely used standard methodology for measuring brightness uniformity, and apply the same approach every time, so you can meaningfully compare our results from one projector to another. But other approaches are just as valid, and every manufacturer is free to choose. So when our measurements don't match the vendor's specs, it doesn't necessarily mean the specs (or our findings) are wrong. It more likely means they're measuring differently.
Unfortunately, this also means that comparing brightness uniformity specs from two different brands can be a classic apples-to-oranges comparison. There's nothing you can do about that. But understanding how different approaches to measuring brightness uniformity can give different results for the same projector can help you judge the numbers in a more realistic context.
How to Measure Projector Brightness Uniformity
A good starting point is the methodology ProjectorCentral--and most other reviewers--use for brightness uniformity. The first step is taking measurements with a light meter, using a 100% white test image. ANSI defines nine points to measure: picture a tic-tac-toe grid overlaid on the entire image, and take a reading at the center point of each of the nine rectangles. The following table shows actual readings from one projector. (Note that the light meter reports its results in lux, though the unit of measurement is irrelevant here; every uniformity formula is a ratio of readings, so the units cancel out.)
Standard Measuring Points for Brightness Uniformity

(Nine readings in lux: three each across the top, middle, and bottom rows. For this unit, the readings ranged from a low of 585 to a high of 903, with 850 at the center.)
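If it helps to picture the grid, here's a minimal sketch in Python of where the nine points fall. The 1920x1080 resolution is just an example, not part of any standard.

```python
# Centers of a 3x3 "tic-tac-toe" grid over the image -- the nine spots
# where the uniformity readings are taken.
def nine_point_grid(width, height):
    """Return (x, y) pixel coordinates of the nine measurement points."""
    xs = [round(width * f) for f in (1/6, 1/2, 5/6)]   # column centers
    ys = [round(height * f) for f in (1/6, 1/2, 5/6)]  # row centers
    return [(x, y) for y in ys for x in xs]

print(nine_point_grid(1920, 1080))
# [(320, 180), (960, 180), (1600, 180), (320, 540), ...]
```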
We then divide the single lowest reading (minimum) by the single highest (maximum), which in this case is 585/903, or 65%. Looking at it another way, the number means that there is a 35% drop in brightness between the brightest and the least bright points measured.
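In code, the calculation is trivial. In this sketch, only the minimum (585), maximum (903), and center (850) readings come from the actual unit; the other six values are hypothetical stand-ins, picked so the grid's averages also match the figures quoted in the next section (708 for the corners, 755 overall).

```python
# Min/max brightness uniformity, the method ProjectorCentral uses.
# Only 585, 903, and 850 are real readings from this unit; the rest
# are hypothetical placeholders.
readings = [
    749, 903, 749,   # top row (lux)
    730, 850, 740,   # middle row
    585, 740, 749,   # bottom row
]

uniformity = min(readings) / max(readings)          # 585 / 903
print(f"Brightness uniformity: {uniformity:.0%}")   # -> 65%
```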
Other Choices
Here's where things get complicated. If you take the same readings from above and plug them into a different, equally valid, formula, you'll get a different percentage for brightness uniformity.
BenQ, for example, currently uses an approach defined by the International Organization for Standardization (ISO), which divides the average of the four outside corner readings by the center reading, even when the center reading isn't the maximum (as it isn't in this case). Using the same data, that works out to 708/850, or 83%. ViewSonic and Epson use the same methodology.
Hitachi, on the other hand, uses the minimum of the outside corner readings (585 in this case) divided by the average of all nine readings (755). That translates to 77%.
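Using the same hypothetical nine-point grid from the sketch above, the two averaged formulas look like this:

```python
# ISO-style and Hitachi-style uniformity on the nine-point grid above.
# Individual values are hypothetical; the corner average (708) and
# overall average (755) match this unit's real figures.
readings = [
    749, 903, 749,
    730, 850, 740,
    585, 740, 749,
]
corners = [readings[i] for i in (0, 2, 6, 8)]   # the four corner readings
center = readings[4]                            # 850

iso = (sum(corners) / len(corners)) / center               # 708 / 850
hitachi = min(corners) / (sum(readings) / len(readings))   # 585 / 755

print(f"ISO-style: {iso:.0%}")           # -> 83%
print(f"Hitachi-style: {hitachi:.0%}")   # -> 77%
```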
Some approaches also use a different number of measurement points. The Information Display Measurements Standard (IDMS) version 1.03, for example, mentions using as few as 5 or as many as 25. The most common variation uses the same nine points already mentioned, plus four more near the corners of the image (each located 10% of the way from a corner toward the center). For the projector we're using as an example here, the additional four readings are:
Corner Measuring Points for Brightness Uniformity

(Four additional readings in lux, one near each corner of the image; the lowest for this unit was 485.)
Yet another methodology we found on one vendor's Japan-based website uses the minimum of those outside points (485) divided by the average of the tic-tac-toe block of nine (755), which would give this same projector a 64% rating.
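As a sketch, the 13-point variant looks like this; only the 485 low reading comes from the unit measured here, and the other three near-corner values are hypothetical.

```python
# 13-point variant: four extra readings near the image corners, with the
# lowest divided by the average of the nine-point grid. Only 485 is a
# real reading from this unit; the other three values are hypothetical.
near_corner_readings = [485, 600, 620, 610]   # lux
nine_point_average = 755                      # average of the tic-tac-toe grid

uniformity = min(near_corner_readings) / nine_point_average   # 485 / 755
print(f"{uniformity:.0%}")                                    # -> 64%
```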
What's Not In a Number
There are even more variations for calculating brightness uniformity, but this is enough to make the point. Depending on which formula you pick, the same set of readings will give you a wide range of brightness uniformity measurements--anywhere between 64% and 83% in this case for the formulas we looked at.
One reason there are so many formulas is that none of them is unarguably better than the others. The version we use tells you the difference between the brightest and dimmest measurements, but it is very sensitive to small variations from one unit to the next. The formulas that use averages tend to minimize differences among units, offering more consistent results across all the units of a given model, but they don't report the actual range from brightest to dimmest. The ones that add the extra corner points are based on more information about the image, but since most images draw your attention toward the center, they emphasize an area you'll tend not to notice unless the differences are extreme.
One fundamental problem that all of these formulas share is that they don't tell you what you ideally need to know, which is not just how much the brightness varies from the brightest to the dimmest sector, but also the gradient--how quickly it changes over distance. This is arguably more important than the brightness uniformity number itself to both image quality and your ability to detect a lack of uniformity.
As the IDMS points out, a 20% change in brightness from one side of the image to the other will be hard to see if the change is gradual going across the screen. But the same 20% drop over one degree in your field of view would be immediately obvious. None of the methodologies for measuring brightness uniformity account for this critical information.
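To see why the gradient matters so much, here's a rough back-of-the-envelope sketch--not part of any standard, and the screen width and seating distance are made-up example values--that expresses the same 20% drop as a rate of change per degree of visual angle:

```python
import math

def drop_per_degree(percent_drop, span_m, viewing_distance_m):
    """Brightness change per degree of visual angle, for a drop spread
    across a given span of the screen seen from a given distance."""
    angle = math.degrees(2 * math.atan((span_m / 2) / viewing_distance_m))
    return percent_drop / angle

# A 20% drop spread across a 2.7 m (120-inch-class) screen width, seen
# from 3.5 m away: roughly 0.5% per degree -- a gradual, hard-to-see ramp.
print(f"{drop_per_degree(20, 2.7, 3.5):.1f}% per degree")

# The same 20% drop packed into about one degree of view (~6 cm at
# 3.5 m): roughly 20% per degree -- an abrupt, obvious edge.
print(f"{drop_per_degree(20, 0.061, 3.5):.1f}% per degree")
```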
Adding The Missing Piece
Therefore, the only good way to fill in what you need to know is to describe it. That's why our reviews say whether you can see any hotspots or dimmer areas with a solid white test image and, if so, how easy they are to see and where they show. And because visual complexity in an image tends to hide brightness variation, we often specifically add whether you can see the variation with text documents, graphics, or photorealistic images.
Some argue that even when you can't see any variation, low brightness uniformity is a problem for movies, because you won't see the entire scene the way the director intended. As we've shown here, however, the brightness uniformity measurement can vary significantly depending on how you compute it--again, from 64% to 83% in our example. So if you're going to rule out projectors based solely on brightness uniformity, you need to decide not what level is too low for you, but what level is too low using which specific methodology.
That said, the verbal description of uniformity will be the more important information for most people, since it will tell you whether you'll likely notice any variation. If the measurement doesn't seem to match the description, it's because of the gradient. When it changes quickly, we may see hotspots or dim areas even though we measure high uniformity. When it changes slowly, we may not see any variation even though we measure low uniformity. The ideal case, of course, is high brightness uniformity paired with a gradient that changes very slowly. If a review is otherwise positive but the projector measures lower-than-average uniformity, we'll let you know if it's meaningful. Sometimes, when looking at brightness uniformity, you can look the other way.