Saturday, October 9, 2010

3D information can be extracted from an image taken by an imaging device such as a camera, provided we know the physical processes that converted the 3D scene into a 2D image. The location of an object in the real world can be represented by a three-dimensional vector. In computer graphics, however, to simplify the modeling of the object, the coordinates must be made homogeneous: an nx1 vector is turned into an (n+1)x1 vector by appending a 1 as its (n+1)th element. The image space, being limited to two dimensions, goes from a 2x1 vector to a 3x1 vector in homogeneous coordinates. With both locations written as homogeneous vectors, the relationship between the image location and the real-world location becomes linear, and it is given by

$$\mathbf{P}_i = \mathbf{A}\,\mathbf{P}_{go}$$
where P_i denotes the homogeneous coordinates of the object in the image and P_go denotes the homogeneous coordinates of the object in the real world. A is the transformation matrix intrinsic to the imaging device. It incorporates a translation, which moves the origin of the real-world coordinate system to the origin of the image plane, and a rotation, which aligns the real-world coordinate axes with the image-centric coordinate system.
From the equation given above, we can solve for the image coordinates of the object in terms of its real-world coordinates. The image coordinates are given by

$$x_i = \frac{a_{11}x_o + a_{12}y_o + a_{13}z_o + a_{14}}{a_{31}x_o + a_{32}y_o + a_{33}z_o + a_{34}}, \qquad y_i = \frac{a_{21}x_o + a_{22}y_o + a_{23}z_o + a_{24}}{a_{31}x_o + a_{32}y_o + a_{33}z_o + a_{34}}$$
Now, by setting a34 = 1, we can separate the a's and solve for them through a system of linear equations. The separated a's are given by the equation

$$\begin{pmatrix} x_o & y_o & z_o & 1 & 0 & 0 & 0 & 0 & -x_o x_i & -y_o x_i & -z_o x_i \\ 0 & 0 & 0 & 0 & x_o & y_o & z_o & 1 & -x_o y_i & -y_o y_i & -z_o y_i \end{pmatrix} \begin{pmatrix} a_{11} \\ a_{12} \\ \vdots \\ a_{33} \end{pmatrix} = \begin{pmatrix} x_i \\ y_i \end{pmatrix}$$
From the equation above, a single point in the image plane gives only two equations, while there are 11 unknown a's. To solve the system, we must have more equations than unknowns, so at least six calibration points are needed.
To calibrate the imaging device used in capturing the image, we used a Tsai grid, shown in Figure 1, to obtain the intrinsic a's.

Figure 1. Tsai grid used to calibrate the camera that took this picture.

The size of each square block is known, so from this image we can get 11 real-world points that specify xo, yo, and zo, and from these same points we can read off the corresponding image locations. Stacking the equations from all points, the previous equation can be written as

$$Q\,a = p, \qquad a = (Q^{T}Q)^{-1}Q^{T}p$$
where Q is the stacked left-most matrix from the previous equation, a is the vector of a's, and p is the stacked right-most vector. The a's can be solved using the second equation shown above. To create Q, we stack the left-most matrix obtained from each point; the same is done with the right-most vectors so that p can be constructed.
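To make the procedure concrete, here is a minimal sketch in Python/NumPy of how the stacking and least-squares solution might look. The names here (calibrate, world_pts, image_pts) are ours, not from the activity manual, and the code assumes the calibration points are already paired up.

```python
import numpy as np

def calibrate(world_pts, image_pts):
    """Solve for the 11 camera parameters (a34 fixed to 1) by
    stacking two linear equations per calibration point."""
    Q, p = [], []
    for (xo, yo, zo), (xi, yi) in zip(world_pts, image_pts):
        Q.append([xo, yo, zo, 1, 0, 0, 0, 0, -xo*xi, -yo*xi, -zo*xi])
        Q.append([0, 0, 0, 0, xo, yo, zo, 1, -xo*yi, -yo*yi, -zo*yi])
        p.extend([xi, yi])
    # Least-squares solution of Q a = p
    a, *_ = np.linalg.lstsq(np.asarray(Q, float), np.asarray(p, float), rcond=None)
    return np.append(a, 1.0).reshape(3, 4)   # full 3x4 matrix A

def project(A, xo, yo, zo):
    """Map a real-world point to image coordinates through A."""
    u, v, w = A @ np.array([xo, yo, zo, 1.0])
    return u / w, v / w
```

np.linalg.lstsq solves the overdetermined system in the least-squares sense, which is equivalent to the (QᵀQ)⁻¹Qᵀp formula above but numerically more stable.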
Now that we have the a's of the camera, we can transform real-world coordinates to image coordinates. To test whether the a's obtained are correct, real-world coordinates were picked and then displayed on the image. Table 1 shows the real-world coordinates used for verifying the calibration, and Figure 2 shows the transformed points.

Table 1. Real world coordinates and their corresponding image coordinates.

Figure 2. Tsai grid displayed with the transformed real world coordinates.

In summary, we have calibrated an imaging device and verified the calibration by transforming real-world coordinates to image coordinates.

Reference
  • Dr. Soriano. Applied Physics 187 activity 8 manual.

Saturday, October 2, 2010

Gamut gamut gamut!

Our eyes can only detect a finite range of colors. These colors are defined by the sensitivities of the cones in our eyes. One institution that standardized these sensitivities is the International Commission on Illumination (CIE). The CIE standard human observer is shown in Figure 1.

Figure 1. CIE standard human observer. The plot shows the sensitivity of each cone type at different wavelengths in the visible region.

From these sensitivities, a color gamut can be constructed to show all the possible colors that a standard human observer can perceive. Using the equations below, we can calculate the boundary of colors that a standard human eye can observe.

$$X = \int P(\lambda)\,\bar{x}(\lambda)\,d\lambda, \qquad Y = \int P(\lambda)\,\bar{y}(\lambda)\,d\lambda, \qquad Z = \int P(\lambda)\,\bar{z}(\lambda)\,d\lambda$$

$$x = \frac{X}{X+Y+Z}, \qquad y = \frac{Y}{X+Y+Z}$$
Here P is the emittance spectrum of the light source, and x̄, ȳ, and z̄ are the sensitivities of the three cones of a standard human eye; x and y are the chromaticity coordinates that get plotted. For P we used a Dirac delta function centered at each wavelength in the visible region, representing a light source with a single, pure color. Applying this method, we get the color gamut of a standard human observer shown in Figure 2.
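As a rough sketch of this computation in Python, assuming the CIE 1931 color-matching functions have been downloaded as a table (the filename cie_1931_cmf.csv is hypothetical): for a Dirac-delta source the integrals collapse to the values of the matching functions themselves, so the spectral locus is just the normalized matching functions.

```python
import numpy as np

# Assumed: a table of the CIE 1931 color-matching functions with columns
# (wavelength, xbar, ybar, zbar); the filename is hypothetical.
lam, xbar, ybar, zbar = np.loadtxt("cie_1931_cmf.csv", delimiter=",", unpack=True)

def chromaticity(P, lam, xbar, ybar, zbar):
    """Chromaticity (x, y) of an emission spectrum P(lambda)."""
    X = np.trapz(P * xbar, lam)
    Y = np.trapz(P * ybar, lam)
    Z = np.trapz(P * zbar, lam)
    return X / (X + Y + Z), Y / (X + Y + Z)

# For a Dirac delta at lambda0: X = xbar(lambda0), Y = ybar(lambda0),
# Z = zbar(lambda0), so the gamut boundary (spectral locus) is simply
# the normalized color-matching functions:
s = xbar + ybar + zbar
x_locus, y_locus = xbar / s, ybar / s
```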

Figure 2. Color gamut of a standard human observer.

The objective of this article is to show how large the color gamut of a given display is and to compare it with the color gamut of a standard human eye.
To construct the gamut of a display, we need to measure its emittance spectra while it displays red, green, and blue. These measured spectra replace the Dirac delta function we used for P. We then compute x and y for each spectrum and plot them together with the color gamut of the standard human eye.
We used a Toshiba laptop LCD and an Epson projector as the displays. We made each display show red, green, blue, white, and black.
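One way to quantify how large a display gamut is, sketched below, assumes the (x, y) chromaticities of the red, green, and blue spectra have already been computed (e.g., with a chromaticity function like the one above): the area of the gamut triangle follows from the shoelace formula. The numeric values here are generic sRGB-like placeholders, not our measured Toshiba or Epson data.

```python
def gamut_area(xy_red, xy_green, xy_blue):
    """Area of the RGB gamut triangle on the xy chromaticity diagram
    (shoelace formula); larger area means more displayable colors."""
    (x1, y1), (x2, y2), (x3, y3) = xy_red, xy_green, xy_blue
    return 0.5 * abs(x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2))

# Placeholder chromaticities (roughly sRGB primaries, for illustration):
area = gamut_area((0.64, 0.33), (0.30, 0.60), (0.15, 0.06))
print(f"gamut triangle area: {area:.3f}")
```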

Figure 3. Color gamuts of a Toshiba laptop LCD and Epson projector.

We can see that the color gamuts of both the laptop LCD and the projector are smaller than the color gamut of a standard human observer. We can also see that when the displays are made to show black or white, the chromaticities land in the region where red, green, and blue are equal in magnitude.
In summary, we have shown the boundary of colors that a standard human eye can observe, and compared the color gamuts of two displays against the standard human observer.


Light and Matter Relationship

Light and matter interact with each other in different ways. Light, with its numerous wavelengths, travels through space and interacts with whatever it hits, whether matter or other light. In general, light and matter interact in three ways: transmission, reflection, and absorption. From these interactions and the rules of physics, numerous phenomena appear in the environment.

When considering these processes, we must look at the properties of light as a wave and at matter on the atomic scale. Light waves have a wavelength and an intensity, which correspond to a certain energy. On the atomic scale, we know that atoms contain electrons in orbitals around the nucleus, and these orbitals have certain energy levels.

Let us consider light of a single wavelength and matter made of a single kind of atom with a single electron. When light whose energy matches an electronic energy level of the atom hits the matter, the energy of the light is absorbed: the electron vibrates and releases the energy in forms other than light (e.g., heat). If the energies do not match, the electron vibrates for a short period of time and re-releases the energy as light. If the object is transparent, the vibration is passed from atom to atom and finally released on the other side; this corresponds to transmission. But if the object is opaque, the electrons on the surface vibrate and release light on the same side as the source, corresponding to reflection. Now, if we scan the wavelength of light across the visible range, we can create a profile of how the matter reacts to light sources of different wavelengths.

Some samples in our everyday environment can demonstrate these processes.

Figure 1. Brightly Colored Leaf

Here we can see how absorption and reflection occur in everyday objects. The brightly colored leaf reflects red to pink and absorbs the rest. This occurs in almost all objects; it can also be seen in the wall in the background. The color we see is the reflected wavelengths, while everything else is absorbed.

Now, reflection can be classified into three sub-types: specular, body, and interreflection. Specular reflection, also known as glossy reflection, is the mirror-like reflection of nearly the whole light source at the angle of incidence. Body or matte reflection is the reflection that remains after absorption has occurred. And interreflection is the reflection of light that has already bounced off a secondary object.

Figure 2. Reflection of the street on the side of a car

Figure 3. Reflection of light off a handkerchief to a wall

In Figure 2, we can see all three types of reflection. First, let's examine the strip of metal on the side of the car. The different shades of silver show the specular reflection (bright silver) and the body reflection (slightly darker silver). Interreflection can be seen on the side of the car: since the car paint is very glossy, the interreflection shows a clear image of the yellow line in the parking lot. Light from the environment hits the yellow line, the yellow line reflects the colors it does not absorb onto the car, and the car reflects them to the camera. This is slightly confusing, so let's take a look at Figure 3. There we see the body reflection of the handkerchief as red and the body reflection of the wall as white. The interreflection can be seen as the red tinge on the wall coming from the red handkerchief.

Now, let us examine some images showing transmission.

Figure 4. Transmission of light from an LED through a pane of glass

As discussed earlier, the transparent object transmits the light from the side of the light source to the opposite side. Since the pane of glass is highly transparent for almost all wavelengths, we can see the light from the source very clearly and with the same color as the source. Most colored filters reflect and transmit the same wavelengths, but some objects are designed to transmit a different wavelength from the one they reflect.

Figure 5. Front view of a Dichroic Filter

Figure 6. Rear view of a Dichroic Filter

From the front (Figure 5), we can see that the filter reflects almost all wavelengths, with slight tinges of blue. This is the reflection part. From the rear (Figure 6), we can see that the filter transmits red relatively more. This is very useful for museums and galleries, where red light (longer wavelengths) can heat and damage the paintings.

Figure 7. Diffraction and interference of light through gratings

Now, let us consider a grating. A grating can be a transparent object with a series of evenly spaced opaque strips. When light hits the grating, each gap between the opaque strips acts as a new source of light (Huygens' principle). As the light is transmitted, the waves from these new point sources interfere with each other, producing either bright fringes (constructive interference) or dark fringes (destructive interference). From optics classes, we know that the equation for grating interference is as follows:

$$d\sin\theta_m = m\lambda$$
where d is the grating separation, θm is the angle from the central axis, m is an integer specifying the order, and λ is the wavelength of the incident light. This determines the angles at which constructive interference occurs. As we can see, interference depends on the wavelength of light, because the constructive and destructive interference produced by a grating is set by the phase difference that the path difference introduces. As we can see in Figure 7, light is diffracted differently per wavelength.
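As a quick numerical illustration of the grating equation, with an assumed grating separation of 1 µm (the actual grating used is not specified here):

```python
import numpy as np

d = 1.0e-6        # assumed grating separation: 1 micron (1000 lines/mm)
m = 1             # first diffraction order
wavelengths = np.array([450e-9, 550e-9, 650e-9])   # blue, green, red

# d sin(theta_m) = m * lambda  ->  theta_m = arcsin(m * lambda / d)
theta = np.degrees(np.arcsin(m * wavelengths / d))
for lam, t in zip(wavelengths, theta):
    print(f"{lam*1e9:.0f} nm -> {t:.1f} deg")
```

Running this gives roughly 27°, 33°, and 41°, showing that longer (redder) wavelengths are diffracted to larger angles, as in Figure 7.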

So how can we get the transmission, reflection, and absorption profiles of different objects? All we need is the color signal of the object and the color signal of the light source on something white. The color signal of the object is the reflectance of the object under the light source used, so dividing the color signal of the object by the color signal of the light source on white gives the reflection profile of the object. If we subtract the normalized reflection profile from 1, we get the transmission or absorption profile. How do we know whether the result is the absorption profile or the transmission profile? The simplest way is to look at the object: if the object is opaque, it is the absorption profile; if the object is transparent, it is the transmission profile. So, let's look at the profiles of different objects.
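A minimal sketch of this division in Python, assuming the two spectra are sampled at the same wavelengths; the function name and the small guard against division by zero are ours:

```python
import numpy as np

def profiles(object_signal, source_on_white):
    """Reflectance profile = object color signal / source-on-white signal;
    its complement (1 - R) is the absorption or transmission profile."""
    R = object_signal / np.maximum(source_on_white, 1e-12)  # avoid 0/0
    R = R / R.max()        # normalize so the complement makes sense
    return R, 1.0 - R
```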
Figure 8. Graph of Reflection and Absorption of a Black Wallet
Figure 9. Graph of Reflection and Absorption of a Blue BPI Card
Figure 10. Graph of Reflection and Absorption of a Green Mini-Guitar
Figure 11. Graph of Reflection and Absorption of a 5 Peso Coin
Figure 12. Graph of Reflection and Absorption of a 20 Peso Bill

Figure 13. Graph of Reflection and Absorption of a 100 Peso Bill


These profiles can be verified by checking the color of each object: the reflectance and absorption profiles correspond to the object colors. We can also see the limitations of the detector in the "noisy" parts of the profiles, with distinguishable noise at wavelengths below 400 nm and above 650 nm.

References:
Wikipedia: The Free Encyclopedia
The Physics Classroom

Thursday, September 30, 2010

Turn on the lights please!

Light. What actually is light? Light is electromagnetic radiation of any wavelength. Light has very interesting properties: it behaves both as a particle and as a wave. Light is also represented as a photon, a massless elementary particle that is the carrier of the electromagnetic force.
There are many ways photons can be generated. They can come from a flashlight, a candle, an LED, or a thermal source, i.e., a black-body like our sun. Light sources may generate photons of different frequencies or wavelengths, some of which are in the visible region of the electromagnetic spectrum. This is why we see colors from light sources like a tungsten-halogen lamp, which is orangey in color.
The objective of this article is to show the emittance spectra of different light sources, and also the emittance of an idealized black-body in the visible region at different temperatures.

Light Emitting Diode
Light Emitting Diodes, or LEDs, are common nowadays. Thanks to developments in semiconductor technology, these light sources have become cheap to produce. They also require low power to operate, which is why they are now commonly used in flashlights and sometimes in portable lasers.
In Figure 1, we can see the emittance spectra of a gallium nitride (GaN) based LED flashlight. Figure 1a is the emittance spectrum obtained from our experiment, while Figure 1b shows the spectrum obtained from another experiment.

(a)
(b)
Figure 1. Emittance spectra of GaN based LED flashlight. (a) Our experimentally obtained emittance spectrum and (b) the emittance spectrum obtained from a different experiment of the white LED.

As we can see, the two experimentally obtained spectra are similar to each other. Both have a peak near 450 nm, which is the GaN emission peak, and a broadband emission from 500 nm to 650 nm, which corresponds to the Ce:YAG in the device. We can see that even though we perceive the light as white, its energy is not equally distributed across wavelengths.

Butane lighter
Butane is a highly flammable, colorless gas that is easily liquefied. It is used in most commercially available lighters. The lighter produces light through a free-radical reaction, specifically combustion.
In Figure 2, we can see the experimentally obtained emittance spectra of the butane lighter.

(a)
(b)
Figure 2. Emittance spectra of a butane lighter. (a) Our experimentally obtained emittance spectrum and (b) emittance spectrum obtained from another experiment.

We can see that Figures 2a and 2b differ from each other. This is because the experiment in Figure 2b was done in an oxygen-rich environment: the butane burns blue due to the abundance of oxygen in its surroundings. In Figure 2a, the experiment was done under normal room conditions, and the butane lighter burns orange instead of blue. However, the peak locations of our experimentally obtained spectrum were close to the peak values of the one in Figure 2b. This was still expected, because the same chemical reactions are taking place and the energy barriers that must be overcome for combustion are the same.

The following part shows the emittance spectra of both a laptop LCD and a projector displaying white, red, green, and blue. From these we will observe the behavior of the laptop LCD and the projector while displaying the said colors.

White
In Figure 3, we can see the emittance spectra of both the LCD and the projector; the two spectra are very different.

(a)
(b)
Figure 3. Emittance spectra of both (a) laptop LCD and (b) projector displaying a white color.

It can be observed that the LCD emits more in the yellow than in the blue wavelengths. The opposite is observed for the projector: minimal emittance in the yellow to red wavelengths, while the blue to violet wavelengths are emitted significantly. The only similarity is that both emit a significant amount of green wavelengths.

Red
Now we let both the LCD and the projector display a red color. The emittance spectra of the two are shown in Figure 4.

(a)
(b)
Figure 4. Emittance spectra of both (a) laptop LCD and (b) projector displaying a red color.

The result of the experiment was as expected. Since both are displaying red, the emittance spectra were expected to have high values in the red to yellow region and minimal emittance elsewhere. However, the LCD also emitted a significant amount of green-yellow wavelengths.

Green
Again, the LCD and the projector were made to display a green color. Figure 5 shows the emittance spectra of the two.

(a)
(b)
Figure 5. Emittance spectra of both (a) laptop LCD and (b) projector while displaying a green color.

Again, the emittance spectra were expected to have high values in the green and near-green wavelengths. However, the LCD shows discrete emission peaks in the blue and yellow wavelengths, while the projector has a broadband emission from the blue to the near-yellow wavelengths.

Blue
The LCD and projector were lastly made to display a blue color. Figure 6 shows the emittance spectra of both displays.

(a)
(b)
Figure 6. Emittance spectra of both (a) laptop LCD and (b) projector while displaying a blue color.

Again, the high values in both spectra were observed in the blue region, as expected. But once more, a discrete emission peak was observed at green wavelengths for the LCD, along with the expected broad spectrum in the blue region. For the projector, a peak was observed in the blue-violet region with a small distribution in the blue region.

Black-body Radiation
A black-body is an idealized object in physics. It absorbs all of the electromagnetic radiation that falls on it and re-emits the radiation in a continuous, characteristic spectrum. The peak of this unique spectrum depends on the temperature of the black-body. Light with a shorter wavelength has a higher frequency; correspondingly, higher frequency means higher energy, and higher energy means higher thermal energy of the black-body. That is why an ideal black-body appears blue when it is very hot and red when it is relatively cool. The chromaticity of the black-body at different temperatures is shown in Figure 7.

Figure 7. Chromaticity of a black-body at different temperatures.

The spectrum of a black-body at a specific temperature is given by Planck's black-body radiation law, shown below.

$$u(\lambda, T) = \frac{8\pi hc}{\lambda^{5}}\,\frac{1}{e^{hc/\lambda kT} - 1}$$
In the equation above, h is Planck's constant, k is the Boltzmann constant, and c is the speed of light in vacuum. With Planck's law, we can get the emittance spectrum of a black-body given its temperature.
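As a sketch, the spectral energy density above can be evaluated numerically to locate the emission peak at each temperature; the specific temperatures and wavelength range below are just illustrative choices.

```python
import numpy as np

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck_energy_density(lam, T):
    """Spectral energy density u(lambda, T) of an ideal black-body."""
    with np.errstate(over="ignore"):   # e^x overflow harmlessly gives u = 0
        return (8 * np.pi * h * c / lam**5) / np.expm1(h * c / (lam * k * T))

lam = np.linspace(100e-9, 20e-6, 5000)   # 100 nm to 20 um
for T in (290, 1000, 4000, 7000):
    peak = lam[np.argmax(planck_energy_density(lam, T))]
    print(f"T = {T:5d} K  peak at {peak*1e9:7.0f} nm")
```

The printed peaks land near 3000 nm at 1000 K and near 10000 nm at room temperature, consistent with the discussion below.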
Figure 8a shows the spectrum of an ideal black-body in the visible region at temperatures ranging from 4000 K to 7000 K. We can see that as the temperature rises, the peak emittance of the black-body shifts toward the blue.

(a)
(b)
Figure 8. Spectral energy density of an ideal black-body at different temperatures. (a) Spectral energy density for temperatures ranging from 4000 K to 7000 K in the visible region. (b) Spectral energy density for temperatures ranging from 1000 K to 7000 K over a much wider wavelength range. The temperature of each black-body is placed near the peak of its spectrum.

Figure 8b shows the spectra of black-bodies with temperatures ranging from 1000 K to 7000 K over a much wider wavelength range. We can see that at 1000 K, the peak of the spectrum is at about 3000 nm, in the infrared region. This hints at why a black-body at room temperature looks black: since we can't see infrared wavelengths, we do not see its emission.

Figure 9. Spectral energy density of a black-body at room temperature.

In Figure 9, we can see the spectral energy density of the black-body at room temperature. The peak emittance of the ideal black-body is found near 10000 nm, at an even longer wavelength. This clearly explains why a black-body appears black at room temperature!

In summary, we have shown the spectra of different light sources and compared them with accepted values. The behavior of displays that can show different colors was also investigated for each display color. Lastly, the emittance spectrum of an ideal black-body was investigated at different temperatures, and an explanation of why a black-body appears black at room temperature was provided.


Thursday, July 22, 2010

Vision Mission

The human eye is our own camera. It has components and qualities similar to those of a Single Lens Reflex (SLR) camera: a variable-focus lens, an aperture, and sensors. We can now explore the properties of the human eye.

One basic property of cameras is focusing distance. Most lens specifications include a minimum and maximum focusing distance, the range over which the lens can be adjusted to focus on an object. Similarly, the lens of the eye changes shape as the muscles around it contract and relax. But, of course, there is a minimum distance beyond which the muscles can no longer compensate to keep the object in focus.

So, for this part, we explore the differences between the minimum focusing distance of each eye and of both eyes used together. To measure the effective minimum focusing distance of both eyes, a pen was placed tip-up squarely in front of both eyes and brought closer to the bridge of the nose until it could no longer be kept in focus. The distance from the pen's tip to the bridge of the nose was then measured and recorded. To measure the minimum focusing distance of each eye, the same procedure was used with one eye covered. The results of the experiment are tabulated below.

Table 1. Minimum Focusing Distance of two subjects

As we can see from Table 1, each eye of both individuals has a different minimum focusing distance, and when the distance was measured for both eyes together, the result was less than the value recorded for either eye alone. This was not expected; we expected the minimum focusing distance of both eyes to be roughly the average of the two single-eye values. However, the values obtained for both eyes of each subject are still acceptable, since many factors affect the focusing of both eyes, such as vision defects like astigmatism, nearsightedness, and farsightedness.

Let's now explore the peripheral vision of the eye. Peripheral vision, according to Wikipedia, is the part of vision that occurs outside the center of gaze. In humans, peripheral vision is weaker than in many other species because the human eye has a greater concentration of receptors at the center of the retina and fewer at the edges. Because of this, human eyes distinguish color and shape far better at the fovea (the region of the retina where the concentration of receptors is greatest) than at any other region of the retina.

To measure the maximum extent of peripheral vision, both eyes were fixated on a point on a wall at a distance of 1 meter. A vertical pen was again placed squarely in front of both eyes, touching the wall. The pen was moved from the center to the left along the wall until it could no longer be seen, and the distance from the center to the final position of the pen was measured. The same procedure was repeated with the pen moving to the right. The data obtained are tabulated below.

Table 2. Maximum Angle of Peripheral Vision of two subjects.

According to Wikipedia, normal vision extends to around 100 degrees outward, away from the nose. We can see in Table 2 that our experimental data coincide with this rough approximation.

Our next stop in the exploration of the human eye is visual acuity. Visual acuity is the acuteness or clearness of vision. It is a measure of the spatial resolution of the visual system, in this case the human eye. The common test used to measure visual acuity is the Snellen chart.

Figure 1. Snellen chart used in measuring visual acuity. Image is taken from Wikipedia (http://en.wikipedia.org/wiki/File:Snellen_chart.svg).

For this part, visual acuity was not measured with a Snellen chart; instead it was measured by finding how far along a line of letters (or a sentence) the eye can clearly discern letters at a distance. The eye was fixated on one letter and all the other letters in the line were covered. One by one, the letters were then uncovered until they could no longer be distinguished. The distance was then recorded, and the angle was computed using basic trigonometry. The tabulated results are shown below.


Table 3. Visual Acuity of two subjects.

From Table 3, both subjects have a maximum angle of visual acuity close to 5 degrees. A search for theoretical values of normal human visual acuity is still in progress to validate the significance of the obtained data.
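For reference, the basic trigonometry step can be written out as a small helper; the numbers below are made-up examples, not the subjects' actual measurements.

```python
import math

def visual_angle(width, distance):
    """Angle (in degrees) subtended at the eye by a line of letters
    of a given width viewed at a given distance."""
    return math.degrees(2 * math.atan((width / 2) / distance))

# Hypothetical numbers: a 10 cm line of text viewed from 1.2 m away
print(f"{visual_angle(0.10, 1.2):.2f} degrees")   # about 4.8 degrees
```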

Finally, we can explore the scotopic and photopic properties of human vision. Wikipedia defines scotopic vision as the vision of the eye under low-light conditions, and photopic vision as vision under well-lit conditions. Under low-light conditions, cone cells are non-functional; this is why we have difficulty seeing color in dark places. Our dark-adapted vision is therefore governed by the rod cells, which are most sensitive at around 498 nm (green-blue). This is comparable to the cones' peak sensitivity, which is at a green wavelength (555 nm).

To characterize the sensitivity of our eyes, we fashioned a box. On the inside of one end of the box, we placed strips of colored paper; on the other, we cut a viewing slit. A garbage bag was attached at the slit so that the viewer could bury his face in it while viewing the colored strips without letting external light enter. A small hole was then made on top to let light from a flashlight pass through. The light source was lowered and raised to vary the intensity of light entering the box. One run was conducted with the light slowly lowered to increase the intensity, and a second run with the light slowly raised to decrease the intensity.

Table 4. Scotopic and Photopic Vision

In the first run in Table 4, we can see that yellow is the first color that could be distinguished by both subjects. This is to be expected, since yellow and green are very close to each other in wavelength. There is a slight discrepancy in the next few colors noticed, which may be because the appearance of color is quick and can confuse the viewer. The second run is much more coherent: we can see that violet is the first color to disappear. This is expected, since violet is farthest from the green wavelengths.

Another property we can explore is the blind spot. Inside the human eye is a distribution of rods and cones, which are the reason we can see. But in a certain area, the nerves from these sensors are bundled together and exit the eye toward the brain. In this area, there are no rods and cones, so when looking at a certain object with one eye, there is a small area we cannot see.

A basic test we can do to confirm the existence of the blind spot is the x and o test. We look at an image with an "x" on the left side and an "o" on the right. If we stare at the "o" and move the image, there is a certain distance at which the "x" disappears. The same test can be done with a GIF image: we can create a GIF with a stationary "x" on one side and a moving "o" on the other. If we stare at the "x" with one eye, we will notice the "o" disappear and reappear as it moves.

To compensate for this, our brain adjusts by using the patterns around the blind spot. To understand this better, we can look at an image of straight vertical lines with a white circle in the middle. If we stare to one side and move the image around, we can find the spot where the white circle falls in the blind spot: the image appears to be purely straight vertical lines. Our brain compensates for the blind spot by filling in the gap.

From these experiments, we can see that human vision has numerous properties that affect how we see. Though color is a very subjective topic, we can see that the human eye reacts to color in a very scientific way.