DHP Presents Citizen COB 1812 3500K 80 CRI vs 1812 3500K 90 CRI

alesh

Well-Known Member
Sorry @The Dawg for hijacking.

Yea, that's why you consider lm/W as well instead of isolating them, which would be useless.

Also, perhaps you don't understand what the graphs are or what I was trying to explain. If comparing their output on that graph was useless, it would have never been published.
You need to do more than consider the luminous efficacy (lm/W) of each CCT/CRI combination. Look at those graphs. They're normalized to 100%. You can't directly compare areas under the curves. Or you can, but it's basically useless. You can read more there. This also applies to @Metacanna's post.

When using this method, keep in mind that there are two important numbers, QER and LER. They will be different for each of the CRIs, with higher CRI having lower QER, which makes up for the lower lumen/W rating.
Higher CRI LEDs tend to have lower LER and higher overall QER (although QER in the 400-700nm range is usually lower) than their lower CRI counterparts.
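If it helps, here's a rough Python/numpy sketch of how LER and QER fall out of a spectral power distribution. Everything in it is invented for illustration - a toy two-peak "white LED" SPD and a crude Gaussian stand-in for the CIE V(λ) curve, not real Citizen data - so only the direction of the difference matters, not the absolute numbers.

import numpy as np

h, c, N_A = 6.626e-34, 2.998e8, 6.022e23      # Planck constant, speed of light, Avogadro

wl = np.arange(380.0, 781.0, 1.0)             # wavelength grid in nm; 1 nm steps, so sums approximate integrals

def fake_spd(phosphor_peak, phosphor_share):
    # Toy white-LED shape: narrow 450 nm blue pump plus a broad phosphor hump (arbitrary units).
    blue = np.exp(-0.5 * ((wl - 450.0) / 10.0) ** 2)
    phosphor = np.exp(-0.5 * ((wl - phosphor_peak) / 60.0) ** 2)
    return (1.0 - phosphor_share) * blue + phosphor_share * phosphor

def v_lambda():
    # Very rough Gaussian stand-in for the CIE photopic curve V(lambda), peaking at 555 nm.
    return np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)

def ler(spd):
    # Luminous efficacy of radiation: lumens per optical watt.
    return 683.0 * (spd * v_lambda()).sum() / spd.sum()

def qer(spd):
    # Photon efficacy of radiation: micromoles of photons per optical joule.
    mol_per_joule = (spd * wl * 1e-9 / (h * c * N_A)).sum() / spd.sum()
    return mol_per_joule * 1e6

red_heavy = fake_spd(phosphor_peak=630, phosphor_share=0.85)    # stands in for a "90 CRI-ish" spectrum
green_heavy = fake_spd(phosphor_peak=580, phosphor_share=0.75)  # stands in for an "80 CRI-ish" spectrum

for name, spd in (("red-heavy", red_heavy), ("green-heavy", green_heavy)):
    print(f"{name:12s} LER ~ {ler(spd):5.0f} lm/W(optical)   QER ~ {qer(spd):4.2f} umol/J(optical)")

The red-heavy spectrum comes out with the lower LER and the higher QER, which is the trade-off being described above.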
 

DonnyDee

Well-Known Member
You need to do more than consider the luminous efficacy (lm/W) of each CCT/CRI combination. Look at those graphs. They're normalized to 100%. You can't directly compare areas under the curves. Or you can, but it's basically useless. You can read more there. This also applies to @Metacanna's post.

I think you're still missing the point: being normalised to 100% makes it simpler to consider the relative output that each cob has - but it seems like you're just being overly pedantic and trying to exaggerate how "useless" it is. Not gonna feed the troll :bigjoint:
 

alesh

Well-Known Member
I think you're still missing the point: being normalised to 100% makes it simpler to consider the relative output that each cob has - but it seems like you're just being overly pedantic and trying to exaggerate how "useless" it is. Not gonna feed the troll :bigjoint:
The graph being normalized to 100% means that the area under each curve is scaled differently. If we look at your original post I replied to, the statement directly above the first image is wrong. While 4000K has the most area when normalized to 100%, in reality 5700K has higher output and higher efficiency. The same goes for the 90 CRI graph, where 5700K is the most efficient one. You're right about 4000K/90CRI having better efficiency than 3000K/90CRI. For the wrong reasons, though.
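A quick way to see why the areas can't be compared once each curve is pinned to its own 100%: take two invented spectra (the numbers below are made up, not Citizen curves) and look at their areas before and after peak-normalization.

import numpy as np

wl = np.arange(380.0, 781.0, 1.0)             # nm; 1 nm grid so plain sums stand in for integrals

def gauss(center, width, height):
    return height * np.exp(-0.5 * ((wl - center) / width) ** 2)

spd_a = gauss(450, 10, 1.0) + gauss(600, 60, 0.45)   # tall narrow blue peak + modest hump: more total output
spd_b = gauss(580, 60, 0.55)                          # no sharp peak, just a broad hump: less total output

for name, spd in (("A", spd_a), ("B", spd_b)):
    absolute_area = spd.sum()                         # proportional to real radiant output
    normalized_area = (spd / spd.max()).sum()         # what a datasheet-style 0-100% plot shows
    print(f"{name}: absolute area = {absolute_area:6.1f}   area after peak-normalization = {normalized_area:6.1f}")

# A has more absolute output, yet B ends up with more area once each curve is
# rescaled so its own peak sits at 100% - which is why area on the normalized
# plots says nothing about output or efficiency.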
 

Randomblame

Well-Known Member
The graph being normalized to 100% means that the area under each curve is scaled differently. If we look at your original post I replied to, the statement directly above the first image is wrong. While 4000K has the most area when normalized to 100%, in reality 5700K has higher output and higher efficiency. The same goes for the 90 CRI graph, where 5700K is the most efficient one. You're right about 4000K/90CRI having better efficiency than 3000K/90CRI. For the wrong reasons, though.

Damn, nothing to contradict in this Vulcan logic.
But it's hard for some people to understand... especially if they're not ready to admit their own mistakes.
 

Metacanna

Well-Known Member
The discussion is interesting; it would be a shame to let it die for ego reasons. It would be nice to read some explanation of how normalization affects the reading of the graphs. Thanks!
 
You guys ever use a PUR meter?
The discussion is interesting; it would be a shame to let it die for ego reasons. It would be nice to read some explanation of how normalization affects the reading of the graphs. Thanks!
Agreed. This is the first I've heard of "Normalization of the Graph".
This is how I do it.
I use the graph as a reference point. I buy some chips. I install said chips. I test the chips on a 3-month grow, start to finish.
I report my findings.
So far I've tried the 3000K 80 CRI, 4000K 80 CRI, and 4000K 80 CRI with a 1750K finish. Now I'm at 4000K with 1750K being used as a sunrise/sunset sim, with 1750/4000 through flower.
Arguing datapoints on paper is useless as... THESE LIGHTS ARE DESIGNED FOR HUMANS, NOT HORTICULTURE.
Real world data trumps paper.
 

sixstring2112

Well-Known Member
The graph being normalized to 100% means that the area under each curve is scaled differently. If we look at your original post I replied to, the statement directly above the first image is wrong. While 4000K has the most area when normalized to 100%, in reality 5700K has higher output and higher efficiency. The same goes for the 90 CRI graph, where 5700K is the most efficient one. You're right about 4000K/90CRI having better efficiency than 3000K/90CRI. For the wrong reasons, though.
Pay attention, fellas. One of the smart ones ^^^^^ around here (:
 

DonnyDee

Well-Known Member
Pay attention, fellas. One of the smart ones ^^^^^ around here (:
Doesn't help if he's a condescending dick about it ;) His suggestion that my comparison was useless is so ridiculously pedantic. I understand what he's getting at and I don't disagree with him - if you're looking for an accurate comparison, the graphs are meaningless. But to say they're useless is unnecessarily pedantic.
 

Greengenes707

Well-Known Member
Ya, I'm happy with the results. My 1200 µmol reading contradicts the 89 lm/W that Bridgelux claims. Another user on another site lost his shit when I posted the results, claiming I was "skewing" them somehow. His claim is that an 80+ lumen/W chip cannot produce 1200 micromoles, period. Even after I made a video PAR testing it he still got pissy. But he's a Cree "fanboy" and nothing beats a Cree, right? Lol.
You are interpreting the data incorrectly. I don't care what went on between you and whoever, just that the info is right. Sounds like his story has some issues too.

Your PAR meter doesn't correlate with, confirm, or deny any lm/W figure. A PAR meter is an instantaneous spot measurement that does not represent total output. Lm/W is a ratio of total output to input wattage, so it is at least correlated with photons via its SPD breakdown.
Lux is the photometric equivalent of PPFD, which is what a PAR meter expresses.

It is true that the chip, in your case, is not producing 1200 µmol/s of light... and it has nothing to do with 80 lm/W or any lm/W figure.
The total output of light in photons is known as PPF; it is a fixed figure that does not depend on other variables like distance or reflectivity, and it is not measured by a PAR or quantum meter.
What is happening (for all PAR meters) is that the sensor is experiencing a situation that extrapolates out to 1200 µmol/s/m² based on a uniform spread of that intensity over a m². If you were to take more finely spaced measurements across the whole claimed illuminated plane (m²), on a cm or mm basis, you could calculate a total figure, which is how machines like goniometers work. But that is not what single spot measurements like a PAR meter are or represent.
Apogee does make a multi-sensor (~4 sensors) wand that is a closer representation of full output.
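To make the spot-vs-total point concrete, here's a toy example (the grid of readings is invented, not from any real fixture): one centre reading scaled up to a square metre versus actually integrating readings across the plane.

import numpy as np

# Pretend we measured PPFD (umol/s/m^2) on a 5 x 5 grid covering a 1 m x 1 m plane;
# the numbers are invented to mimic a hot spot under a single point source.
ppfd_grid = np.array([
    [150, 220, 280, 220, 150],
    [220, 480, 700, 480, 220],
    [280, 700, 1200, 700, 280],   # the centre cell is the "1200" a spot meter would show
    [220, 480, 700, 480, 220],
    [150, 220, 280, 220, 150],
], dtype=float)

cell_area = 1.0 / ppfd_grid.size                       # each cell covers 1/25 of a square metre

spot_as_if_uniform = ppfd_grid.max() * 1.0             # naive: treat the centre reading as uniform over 1 m^2
integrated_onto_plane = (ppfd_grid * cell_area).sum()  # sum of (PPFD x area) over the whole plane

print(f"centre reading treated as uniform over 1 m^2: {spot_as_if_uniform:.0f} umol/s")
print(f"grid-integrated flux actually hitting the plane: {integrated_onto_plane:.0f} umol/s")

# Even the grid total only counts light landing on this one plane; the fixture's
# total PPF needs something like a goniometer or integrating sphere.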

The HF does measure well for reds and blues, and from what I have tested it will measure up to 780 nm and down to 335 nm. Apogee has a significant decrease after 680 nm and below 400 nm, whereas the LI-COR sensor will pick up at least 1200 nm. They are all +/- 5-10% of each other, and for growing we only care about the hundreds for significant digits. HF is far cheaper than the other two and more than adequate for the somewhat hostile environment a grow area can present. It comes down to the bucks you want to throw at it, but keep in mind that a PAR meter is the tool you use for spot-checking the light getting to your leaves.
What quantum sensor are you using that picks up past 720nm?
The LI-COR sensor is basically quantum neutral (even photon readings) because of how its response is weighted. But it is bound to essentially 400-700 nm (~390-720 nm technically).
https://www.licor.com/env/products/light/quantum.html

Apogee's basic sensor is the same one that many OEMs like Hydrofarm use. It's the one with the major fall-off at ~660 nm.
Apogee's new sensor is much closer to the LI-COR, covering down to 400 nm and getting close (692 nm) to the full 700 nm.
https://www.apogeeinstruments.com/full-spectrum-quantum-sensor/
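For anyone wondering why that 400-700 nm band limit matters, the photon-counting side of it is just E = hc/λ. A short sketch, plain physics and nothing specific to any sensor:

h, c, N_A = 6.626e-34, 2.998e8, 6.022e23      # Planck constant, speed of light, Avogadro

def umol_photons_per_joule(wavelength_nm):
    # Micromoles of photons delivered by 1 J of monochromatic light at this wavelength.
    return (wavelength_nm * 1e-9) / (h * c * N_A) * 1e6

for nm in (450, 555, 660, 730):
    print(f"{nm} nm: {umol_photons_per_joule(nm):.2f} umol of photons per joule")

# A watt of 660 nm red carries roughly 47% more photons than a watt of 450 nm blue,
# which is why quantum/PAR sensors are meant to weight each 400-700 nm photon equally
# rather than weight by energy - and why 730 nm, despite carrying the most photons per
# joule here, falls outside what an ideal PAR sensor is supposed to count.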

Yea, that's why you consider lm/W as well instead of isolating them, which would be useless.

Also, perhaps you don't understand what the graphs are or what I was trying to explain. If comparing their output on that graph was useless, it would have never been published.
That is not enough. You need an un-normalized graph... aka absolute/total output. And that is what I was trying to tell you.
All curves in the graph are given for Tj 85°C at 1620 mA, so I guess you can compare them objectively.

Edit: But I will admit my first thought was the same. Then I noticed the "TJ 85°C, at 1620mA" on the top right of each graph.
They are normalized to one. Meaning whatever the highest output point for each curve is... gets set to 1. This skews the representation of absolute/total output (not ratios) of bands. 4K has as much total red output as 3K... but more blue, so by ratio it comes in as cooler. But it is not less red, as many think. Your plants are getting the same amount of absolute/total red.
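If anyone wants to play with it, here's one way to put peak-normalized curves back on a comparable footing: rescale each relative SPD so its photopic-weighted integral matches the chip's rated lumens, then compare band totals. The curves, the 1000/1050 lm ratings, and the Gaussian stand-in for V(λ) below are all invented for illustration, not actual datasheet values.

import numpy as np

wl = np.arange(380.0, 781.0, 1.0)                     # nm

def gauss(center, width, height):
    return height * np.exp(-0.5 * ((wl - center) / width) ** 2)

def v_lambda():
    return np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)  # crude stand-in for the photopic curve

def to_absolute(relative_spd, rated_lumens):
    # Scale a relative SPD so its photopic-weighted sum matches the rated luminous flux.
    # Result is an (approximate) absolute SPD in W per 1 nm bin.
    k = rated_lumens / (683.0 * (relative_spd * v_lambda()).sum())
    return relative_spd * k

def band_watts(absolute_spd, lo, hi):
    mask = (wl >= lo) & (wl < hi)
    return absolute_spd[mask].sum()

# Two made-up relative curves, each peak-normalized to 1 like a datasheet plot.
warm = gauss(450, 10, 0.35) + gauss(620, 60, 1.0)     # "3000K-ish"
cool = gauss(450, 10, 0.70) + gauss(600, 60, 1.0)     # "4000K-ish"
warm, cool = warm / warm.max(), cool / cool.max()

for name, rel, lumens in (("warm", warm, 1000), ("cool", cool, 1050)):
    absolute = to_absolute(rel, lumens)
    print(f"{name}: red 600-700 nm ~ {band_watts(absolute, 600, 700):.2f} W   "
          f"blue 400-500 nm ~ {band_watts(absolute, 400, 500):.2f} W")

# Only after this rescaling does "which chip puts out more red" become a fair question;
# on the peak-normalized plots both curves top out at the same 100%.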
 

DonnyDee

Well-Known Member
That is not enough. You need an un-normalized graph... aka absolute/total output. And that is what I was trying to tell you.
Thanks. Would considering the curves, irrespective of them being represented on different scales on the Y axis, not then provide some sort of comparison of the equivalent output at different wavelengths? That's how I understood it at least - the ratio of output at different wavelengths in comparison to the wavelength with the highest output. Maybe I didn't explain myself eloquently, but I somehow can't accept that these graphs are useless and tell us nothing about the relative spectral output of these different models. I understand that they cannot be directly compared, as we don't know what the 100% is quantified as in each scenario, but when comparing the different curves, which have been normalised, wouldn't greater area under the curve (I guess curve "flatness" probably would have been more representative of what I was trying to explain) then suggest a more even response, without as much difference between the highest output wavelengths and the lowest? Image here
 

alesh

Well-Known Member
Thanks. Would considering the curves, irrespective of them being represented on different scales on the Y axis, not then provide some sort of comparison of the equivalent output at different wavelengths? That's how I understood it at least - the ratio of output at different wavelengths in comparison to the wavelength with the highest output. Maybe I didn't explain myself eloquently, but I somehow can't accept that these graphs are useless and tell us nothing about the relative spectral output of these different models. I understand that they cannot be directly compared, as we don't know what the 100% is quantified as in each scenario, but when comparing the different curves, which have been normalised, wouldn't greater area under the curve (I guess curve "flatness" probably would have been more representative of what I was trying to explain) then suggest a more even response, without as much difference between the highest output wavelengths and the lowest? Image here
Agreed if you put it this way.
What I called useless was comparing areas under the curves to determine/compare efficiency.
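A tiny sanity check of that distinction, with an invented curve: within-curve ratios don't change however the curve is scaled, but the raw area does, which is why ratios are fine to read off a normalized plot and area comparisons aren't.

import numpy as np

wl = np.arange(380.0, 781.0, 1.0)
spd = 0.6 * np.exp(-0.5 * ((wl - 450) / 10) ** 2) + np.exp(-0.5 * ((wl - 610) / 60) ** 2)

blue = (wl >= 400) & (wl < 500)
red = (wl >= 600) & (wl < 700)

for scale in (1.0, 1.0 / spd.max(), 37.2):            # arbitrary rescalings of the same curve
    s = spd * scale
    print(f"blue/red ratio = {s[blue].sum() / s[red].sum():.3f}   'area' = {s.sum():8.1f}")

# The ratio is identical however the curve is scaled; the raw area changes with the
# scaling, which is why areas from differently normalized plots can't be compared.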
 

Photon Flinger

Well-Known Member
You are interpreting the data incorrectly. I don't care what went on between you and whoever, just that the info is right. Sounds like his story has some issues too.

Your PAR meter doesn't correlate with, confirm, or deny any lm/W figure. A PAR meter is an instantaneous spot measurement that does not represent total output. Lm/W is a ratio of total output to input wattage, so it is at least correlated with photons via its SPD breakdown.
Lux is the photometric equivalent of PPFD, which is what a PAR meter expresses.

It is true that the chip, in your case, is not producing 1200 µmol/s of light... and it has nothing to do with 80 lm/W or any lm/W figure.
The total output of light in photons is known as PPF; it is a fixed figure that does not depend on other variables like distance or reflectivity, and it is not measured by a PAR or quantum meter.
What is happening (for all PAR meters) is that the sensor is experiencing a situation that extrapolates out to 1200 µmol/s/m² based on a uniform spread of that intensity over a m². If you were to take more finely spaced measurements across the whole claimed illuminated plane (m²), on a cm or mm basis, you could calculate a total figure, which is how machines like goniometers work. But that is not what single spot measurements like a PAR meter are or represent.
Apogee does make a multi-sensor (~4 sensors) wand that is a closer representation of full output.


What quantum sensor are you using that picks up past 720nm?
The LI-COR sensor is basically quantum neutral (even photon readings) because of how its response is weighted. But it is bound to essentially 400-700 nm (~390-720 nm technically).
https://www.licor.com/env/products/light/quantum.html

Apogee's basic sensor is the same one that many OEMs like Hydrofarm use. It's the one with the major fall-off at ~660 nm.
Apogee's new sensor is much closer to the LI-COR, covering down to 400 nm and getting close (692 nm) to the full 700 nm.
https://www.apogeeinstruments.com/full-spectrum-quantum-sensor/


That is not enough. You need an un-normalized graph... aka absolute/total output. And that is what I was trying to tell you.

They are normalized to one. Meaning whatever the highest output point for each curve is... gets set to 1. This skews the representation of absolute/total output (not ratios) of bands. 4K has as much total red output as 3K... but more blue, so by ratio it comes in as cooler. But it is not less red, as many think. Your plants are getting the same amount of absolute/total red.

Part of your response to Donnie is wrong; there is no "extrapolating", as you put it, happening in a measurement. It is just a measurement, plain and simple.

Depending on the filter, technically a quantum sensor could pick up photons of any wavelength, since it is just a simple counter. The LI-COR sensor I used to pick up the 1200 nm was an older 190 (late '90s, iirc), which probably doesn't have the same filters they are using today. Testing was done using discrete laser diodes at 720, 730, 760, 780, and 1210 nm wavelengths for the reds, and some 335 nm UVA LEDs I had in the lab. The LI-COR registered them all, the Apogee none, and the HF up to 780 nm.

I wanted to see if there was a significant enough difference between the models for hobby growing, and realistically I couldn't find a performance difference that justified one over another. As I said, the tool is used to determine the amount of photons available at a spot, which each of them does just fine.
 

Greengenes707

Well-Known Member
Part of your response to Donnie is wrong; there is no "extrapolating", as you put it, happening in a measurement. It is just a measurement, plain and simple.

Depending on the filter, technically a quantum sensor could pick up photons of any wavelength, since it is just a simple counter. The LI-COR sensor I used to pick up the 1200 nm was an older 190 (late '90s, iirc), which probably doesn't have the same filters they are using today. Testing was done using discrete laser diodes at 720, 730, 760, 780, and 1210 nm wavelengths for the reds, and some 335 nm UVA LEDs I had in the lab. The LI-COR registered them all, the Apogee none, and the HF up to 780 nm.

I wanted to see if there was a significant enough difference between the models for hobby growing, and realistically I couldn't find a performance difference that justified one over another. As I said, the tool is used to determine the amount of photons available at a spot, which each of them does just fine.
It is extrapolated... calculated... put through an algorithm... whatever you want to call it. But it is true: there are not "1200 µmol" falling on the sensor when it reads 1200 µmol. The sensor is ~1 sq in... a square metre is 1550 sq in. The quantum meter reads out a figure for a m²... not a measurement of the single square inch.
The 1200 µmol is extrapolated out under the assumption of constant illumination, equivalent to what the sensor is experiencing, over a whole m² (aka the other 1549 sq in that were not measured).
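Back-of-envelope version, using the ~1 sq in / 1550 sq in figures from above (actual sensor head areas differ by model):

SQIN_PER_M2 = 1550.0          # roughly 1550 square inches in a square metre
reading = 1200.0              # umol/s/m^2 shown on the meter

on_sensor = reading / SQIN_PER_M2   # umol/s actually landing on a ~1 sq in sensor head
print(f"~{on_sensor:.2f} umol/s hits the sensor itself")

# The meter scales that tiny sample up to a full square metre on the assumption that
# the other ~1549 sq in are lit exactly the same - that is the extrapolation step.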
 