One of the key performance numbers we report in every review is brightness uniformity. And if you read our reviews regularly you know that, depending on the type of projector and other factors, our measurement might vary from around 60% to about 90% in the best examples. The concept of brightness uniformity seems intuitively clear, as a measure of how uniform the brightness is across the screen. It's logical that a projected image should be free of obvious hotspots or unintended dark areas. However, there's a lot of misunderstanding about what brightness uniformity means, and it is easy to take the number too seriously.

Part of the issue is that there is no good way to sum up everything involved in uniformity in a single number--a point we'll come back to later. But there's an even more basic problem in interpreting brightness uniformity measurements: there are quite a few different methods for coming up with that number, and you can rarely be sure which has been used.

Brightness Uniformity
Poor brightness uniformity, as illustrated in this graphic (courtesy of BenQ), can result in noticeable fading at one or more corners or an obvious hotspot at the center or elsewhere.

At ProjectorCentral, we follow a widely used standard methodology for measuring brightness uniformity, and apply the same approach every time, so you can meaningfully compare our results from one projector to another. But other approaches are just as valid, and every manufacturer is free to choose. So when our measurements don't match the vendor's specs, it doesn't necessarily mean the specs (or our findings) are wrong. It more likely means they're measuring differently.

Unfortunately, this also means that comparing brightness uniformity specs from two different brands can be a classic apples-to-oranges comparison. There's nothing you can do about that. But understanding how different approaches to measuring brightness uniformity can give different results for the same projector can help you judge the numbers in a more realistic context.

How to Measure Projector Brightness Uniformity

ANSI Lumen 9-Point Pattern

A good starting point is the methodology ProjectorCentral — and most other reviewers — use for brightness uniformity. The first step is taking measurements with a light meter, using a 100% white test image. ANSI defines nine points to measure: picture a tic-tac-toe grid over the entire image, and measure at the center point of each rectangle. The following table shows actual readings from one projector. (Note that the light meter reports its results in a unit called lux, though the unit of measurement is irrelevant here.)

[Table: Sample Readings for the Nine Standard Measuring Points for Brightness Uniformity]

We then divide the single lowest reading (minimum) by the single highest (maximum), which in this case is 585/903, or 65%. Looking at it another way, the number means that there is a 35% drop in brightness between the brightest and the least bright points measured.
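
The calculation above reduces to a one-line function. Here is a minimal Python sketch; since only the extremes matter for this formula, the two extreme readings quoted in the text (585 and 903 lux) are enough to reproduce the result:

```python
def ansi_uniformity(readings):
    """Brightness uniformity as the minimum reading divided by the maximum."""
    return min(readings) / max(readings)

# Only the lowest and highest of the nine readings affect this formula,
# so the two extreme values from the sample projector suffice (in lux):
ratio = ansi_uniformity([585, 903])
print(f"{ratio:.0%}")  # prints "65%"
```
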

Other Choices

Here's where things get complicated. If you take the same readings from above and plug them into a different, equally valid, formula, you'll get a different percentage for brightness uniformity.

BenQ, for example, currently uses an approach defined by the International Organization for Standardization (ISO), which divides the average of the four outside corner readings by the center reading, even when the center reading isn't the maximum (as is the case here). Using the same data, that works out to 708/850, or 83%. ViewSonic and Epson also use the same methodology.

Hitachi, on the other hand, uses the minimum of the outside corner readings (585 in this case) divided by the average of all nine readings (755). That translates to 77%.
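
To make the contrast between these two formulas concrete, here is a short sketch in Python. Since the full per-point table isn't reproduced above, it plugs in the aggregate values quoted in the text (all in lux):

```python
# Aggregate values quoted in the article for the same sample projector:
corner_average = 708      # average of the four outside corner readings
center = 850              # center-point reading
corner_minimum = 585      # lowest outside corner reading
nine_point_average = 755  # average of all nine readings

# ISO-style: average of the outside corners over the center reading.
print(f"ISO:     {corner_average / center:.0%}")              # prints "ISO:     83%"

# Hitachi-style: minimum outside corner over the average of all nine.
print(f"Hitachi: {corner_minimum / nine_point_average:.0%}")  # prints "Hitachi: 77%"
```

Same nine readings, two defensible formulas, a six-point spread in the result.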

Some approaches also use a different number of measurement points. The Information Display Measurements Standard (IDMS) version 1.03, for example, mentions using as few as 5 or as many as 25. The most common variation uses the same nine points already mentioned, plus four more near the corners of the image (10% of the way between the corner and midpoint). For the projector we're using as an example here, the additional four readings are:

[Table: Additional Sample Readings for the Four Corner Measuring Points for Brightness Uniformity]

Yet another methodology we found on one vendor's Japan-based website uses the minimum of those outside points (485) divided by the average of the tic-tac-toe block of nine (755), which would give this same projector a 64% rating.
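
That 13-point variant is just as easy to express; again the values come straight from the figures quoted in the text:

```python
# 13-point variant: minimum of the four extra near-corner readings
# divided by the average of the original nine-point tic-tac-toe block.
min_near_corner = 485      # lowest of the four extra corner readings (lux)
nine_point_average = 755   # average of the nine standard readings (lux)

print(f"{min_near_corner / nine_point_average:.0%}")  # prints "64%"
```
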

What's Not In a Number

There are even more variations for calculating brightness uniformity, but this is enough to make the point. Depending on which formula you pick, the same set of readings will give you a wide range of brightness uniformity measurements--anywhere between 64% and 83% in this case for the formulas we looked at.

One reason there are so many formulas is that none of them is unarguably better than the others. The version we use tells you the difference between the brightest and dimmest measurements, but it is very sensitive to small variations from one unit to the next. The formulas that use averages tend to minimize differences among units, offering more consistent results for all the units of a given model, but they don't report the actual range from brightest to dimmest. The ones that use the additional corner points are based on more information about the image. But since most images draw your attention closer to the center, those methodologies give emphasis to an area of the image you'll tend not to notice unless the differences are extreme.

One fundamental problem that all of these formulas share is that they don't tell you what you ideally need to know, which is not just how much the brightness varies from the brightest to the dimmest sector, but also the gradient--how quickly it changes over distance. This is arguably more important than the brightness uniformity number itself to both image quality and your ability to detect a lack of uniformity.

As the IDMS points out, a 20% change in brightness from one side of the image to the other will be hard to see if the change is gradual going across the screen. But the same 20% drop over one degree in your field of view would be immediately obvious. None of the methodologies for measuring brightness uniformity account for this critical information.
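
A toy example makes the point. In the Python sketch below (the numbers are hypothetical, chosen purely for illustration), two brightness profiles both fall 20% from one side of the screen to the other, but only one would be obvious to the eye:

```python
def max_step(profile):
    """Largest brightness change between adjacent samples, as a fraction of peak."""
    peak = max(profile)
    return max(abs(b - a) for a, b in zip(profile, profile[1:])) / peak

# Both profiles drop 20% overall across the screen (hypothetical lux values)...
gradual = [1000 - 25 * i for i in range(9)]  # ...in small, even steps
abrupt = [1000] * 8 + [800]                  # ...in one sudden step

print(f"gradual worst step: {max_step(gradual):.1%}")  # prints "gradual worst step: 2.5%"
print(f"abrupt worst step:  {max_step(abrupt):.1%}")   # prints "abrupt worst step:  20.0%"
```

Any min/max or average-based formula scores these two profiles identically, even though only the abrupt one would catch your eye.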

Adding The Missing Piece

Therefore, the only good way to fill in what you need to know is to describe it. That's why our reviews say whether you can see any hotspots or dimmer areas with a solid white test image and, if so, how easy they are to see and where they show. And because visual complexity in an image tends to hide brightness variation, we often specifically add whether you can see the variation with text documents, graphics, or photorealistic images.

Some argue that even when you can't see any variation, low brightness uniformity is a problem for movies, because you won't see the entire scene the way the director intended. As we've shown here, however, the brightness uniformity measurement can vary significantly depending on how you compute it--again, from 64% to 83% in our example. So if you're going to rule out projectors based solely on brightness uniformity, you need to decide not what level is too low for you, but what level is too low using which specific methodology.

That said, the verbal description of uniformity will be the more important information for most people, since it will tell you whether you'll likely notice any variation. If the measurement doesn't seem to match the description, it's because of the gradient. When it changes quickly, we may see hotspots or dim areas even though we measure high uniformity. When it changes slowly, we may not see any variation even though we measure low uniformity. The ideal case, of course, is high brightness uniformity paired with a gradient that changes very slowly. If a review is otherwise positive but the projector measures lower-than-average uniformity, we'll let you know if it's meaningful. Sometimes, when looking at brightness uniformity, you can look the other way.

Comments (7)
Norm Posted Oct 17, 2018 11:33 PM PST
Couldn't you just post a couple of pictures? One picture unadulterated, and one pointing out what the text is explaining ("here's a hot spot", "the left side from 1/3 up to the corner has a sharp gradation", etc.) with arrows and circles and stuff.

Yes, it requires a good screen to see, but if it's just in the center of your screen it won't be affected by your screen's non-uniformities.
Rob Sabin, editor Posted Oct 17, 2018 11:40 PM PST
Actually, except in extreme cases we're usually talking about very subtle gradations between areas of the screen, and I don't know that photography could accurately and fairly translate what the eye is seeing and what the effect is on viewing. I think David's caution should be heeded about not overreacting to what we'd call normal variations across the screen. If you go back in our review archive, I'd say it's probably rare to find any projector that proves to be not recommendable specifically because of severe issues with screen uniformity.
Manfred Posted Oct 18, 2018 1:18 AM PST
It's pretty interesting the different standards in play and can def be confusing when looking at advertised specs vs reviewed specs. In future reviews will you start putting the brightness tables in the review? This data would allow measuring via the different standards.
Rob Sabin, editor Posted Oct 18, 2018 6:19 AM PST
Manfred, this is an interesting idea in that it provides the data for readers to calculate their own number based on different formulas and it does show a visual representation of where there might be variation on the screen. But again, I worry that this information could be misconstrued or given too much weight by readers. I refer to David's comments about our eyes-on description being the most critical information for potential buyers of a projector. The key takeaway from this article is to implore people to not get too hung up on the number, which really doesn't tell you the most important part of the story (the gradient) regardless of how it's calculated.
Chris Posted Oct 19, 2018 12:45 PM PST
One thing should be mentioned: deficiencies in brightness uniformity will become more visible when using high gain screens, especially ALR, where the more intense light at the center can accentuate the hot spotting effect. For example, when pairing an Elite Cinegrey 5D ALR screen with the UHD60/65, the poor uniformity becomes much more apparent.
Bruce L Posted Oct 25, 2018 9:54 AM PST
Appreciate the insight. I was recently measuring uniformity in comparing screens (I had a CineGrey 3D that didn't work with my short throw ratio). I also found that even though the measured uniformity from seating was awful (~20% center to edge) it wasn't that bad in practice except on either very dim or extremely uniform (e.g. pure white) scenes where it was noticeable. I now have a screen with less ALR (carls grey) that I measure as 60% uniform, and it's a good balance. As you said, in practice our eyes don't notice lack of uniformity that well as long as it's gradual.

In my case the projector is quite uniform, so I was looking mostly at screen loss as viewing angle changes, but I think the same argument applies.
David Theil Posted Aug 19, 2019 6:00 PM PST
Why not (std deviation)/mean, where the standard deviation is over every pixel in the scene array?
