Why wouldn't a b/w resolution chart benefit SD9?

There are no luminance pixels on the CCD, and that's the problem
with your analogy.
Problem is, a luminance change to saturated red will affect only
the red pixels on the sensor. The green and blue pixels are
useless.
Every pixel contributes luminance information.

Look at the following diagram. We have a pure red ball shaded down to black, a CFA/Bayer pattern, and what that pattern "sees" when it looks at the ball.



You can see how the red pixels react to the ball. You can also see, however, that even the green and blue pixels are lighter on the bright side of the ball than on the darker side.

That's the trick. A green filter over a pixel well isn't going to let in only green light. Its cutoff is nowhere near perfect: throw enough light on it, at almost any wavelength, and SOME light will penetrate.

Look at it this way. I can put a CC20R (color-compensating red) filter on my camera. It's a red filter, right? Did that bias my color towards red? Yes. Did it stop ALL of the green and blue light from entering the lens? No.

So, yes, a given pixel's value is determined pretty much by averaging its own value with those of its neighbors. But unless it's pointed at something totally dark, it WILL have a value.

And if it is -- unlike its neighbors -- pointed at something dark, its own luminance value will pull down its average. And what does that lower value mean? Well, you now have a bit of darker image detail at that location.

It can be a lot more difficult when you add edge detection and whatnot, but those are the basics.
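
To make the averaging idea concrete, here's a rough Python sketch of the crudest possible version (plain neighbor averaging at one photosite; the array values and helper name are invented for illustration, and real demosaicing algorithms are considerably smarter than this):

import numpy as np

def green_at_red_site(mosaic, y, x):
    # In an RGGB mosaic a red photosite has green neighbors above, below,
    # left, and right; the crudest estimate of its missing green value is
    # simply their average.
    return (mosaic[y-1, x] + mosaic[y+1, x] + mosaic[y, x-1] + mosaic[y, x+1]) / 4.0

# Hypothetical 4x4 patch of raw values (0-255), bright on the left and dark
# on the right, like the shaded side of the red ball:
mosaic = np.array([[200, 180,  40,  20],
                   [190, 170,  35,  15],
                   [195, 175,  38,  18],
                   [185, 165,  30,  10]], dtype=float)

# (2, 2) is a red site in an RGGB layout; the estimate is pulled down by the
# dark neighbors to its right, so the interpolated result still carries some
# of the bright-to-dark detail.
print(green_at_red_site(mosaic, 2, 2))   # 64.5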
 
It seems like you might have a misconception about the Bayer
pattern detecting luminance separately from R,G, B. In the example
you gave above, the remaining components of the color signal will
be interpolated from neighboring pixels. Luminance is a function
of all three of R, G, and B, so without knowing the values of the
neighboring pixels, we can't know whether the Bayer pattern would
get the right luminance.
Every pixel contributes luminance. Check the following post:

http://www.dpreview.com/forums/read.asp?forum=1027&message=3509412
 
My point is illustrated below. This shows how a Bayer
interpolation algorithm performs on a white resolution target with
black lines, and the exact same resolution target with the white
simply replaced with red (to give you a red target with black
lines).

The same Bayer interpolation algorithm was used on both, and this
example shows you what you can expect from a 6 MP Bayer sensor, and
also shows why B/W resolution targets overstate the "average"
resolution of a Bayer image:
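
If anyone wants to convince themselves of the sampling side of this, here's a tiny, idealized Python sketch (this is not the simulation used for the images above; it assumes an RGGB layout and treats "pure red" as producing zero green/blue response) showing how much of the mosaic actually records anything for each target:

import numpy as np

h, w = 8, 8
cfa = np.empty((h, w), dtype='<U1')      # which filter sits over each photosite
cfa[0::2, 0::2] = 'R'
cfa[0::2, 1::2] = 'G'
cfa[1::2, 0::2] = 'G'
cfa[1::2, 1::2] = 'B'

white_target = {'R': 255, 'G': 255, 'B': 255}   # white lines: every channel responds
red_target   = {'R': 255, 'G': 0,   'B': 0}     # idealized saturated red: only R responds

for name, target in (('white', white_target), ('red', red_target)):
    samples = np.vectorize(target.get)(cfa)     # what each photosite records
    lit = np.count_nonzero(samples)
    print(name, 'target:', lit, 'of', h * w, 'photosites carry signal')

# white target: 64 of 64; red target: 16 of 64, i.e. only every fourth site.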



--
Mike
http://www.ddisoftware.com
Could you compare your simulation with a real shot of a B&W resolution chart taken with a D60/D100 and a red filter (Wratten 25)?
 
Every pixel contributes luminance information.
That is simply not always true. The fact that the actual filters over each pixel pass light on a bell curve and are not perfect (they overlap some) doesn't change anything but the magnitude of the problem. I agree that it will be rare to find a frequency of red light that doesn't excite the green or blue sensors at all, with the subject reflecting only that one frequency, but you can come very close. The issue here is not whether the green and blue pixels can all be black/zero or not; it is whether the green/blue pixels are picking up enough information to contribute to the interpolation! In many cases, in real images, they do not.

If you look at the filter response of the red, green, and blue filters on a CFA, you'll see that there are frequencies within each filter's range that are far enough from the other filters' passbands that background noise excites the photodiodes more than the light leaking through at the wrong frequency. That means that there are some frequencies of light at which only the red pixels will gather information, and the portion of that red light that makes it through the imperfect blue/green filters is negligible; that is, it is outside the usable quantization or noise range of the sensor.
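
A back-of-the-envelope illustration of that last point (every number below is invented purely for the sake of the arithmetic; none of it comes from a real sensor's data sheet):

# Deep red light, with a hypothetical exposure of 10,000 photons per photosite.
photons       = 10_000
qe_red_site   = 0.30     # invented QE of a red-filtered site at this wavelength
qe_green_leak = 0.002    # invented leakage through the green filter
read_noise_e  = 15       # invented read noise, in electrons

red_electrons   = photons * qe_red_site     # 3000 e-: a strong, usable signal
green_electrons = photons * qe_green_leak   # 20 e-: comparable to the noise floor

print(red_electrons, green_electrons, read_noise_e)
# With these made-up numbers the green sites record roughly as much noise as
# signal, which is the sense in which their contribution becomes negligible.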

In the end, the greater the separation in your red, green, and blue pixel values, the less you'll be able to predict the real value for luminance at each individual pixel (which is what dictates resolution). You may pick a red that is not perfectly pure, or one intense enough to pass some light through the green or blue filters and build up some charge on the sensor, but if your red channel is at 90% of maximum and your green and blue pixels are at 10%, the contribution to image resolution from the green and blue pixels is nearly useless to a demosaicing algorithm.

--
Mike
http://www.ddisoftware.com
 
It seems like you might have a misconception about the Bayer
pattern detecting luminance separately from R,G, B. In the example
you gave above, the remaining components of the color signal will
be interpolated from neighboring pixels. Luminance is a function
of all three of R, G, and B, so without knowing the values of the
neighboring pixels, we can't know whether the Bayer pattern would
get the right luminance.
Every pixel contributes luminance. Check the following post:
Of course it does. I never said anything to the contrary.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Every pixel contributes luminance. Check the following post:
Of course it does. I never said anything to the contrary.
To "contribute" to luminance, you have to have something of value to contribute. The problem with the above is that, while it might be true in a theoretical sense, the portion of the actual luminance that each red, green, or blue pixel contributes varies by a huge amount depending on the color being sampled. The greater the separation between the contributions of red versus green/blue (or blue versus green/red), the closer you get to resolving only 1/4 of the sensor's total resolution, because the contribution the other pixels can make to the interpolation is minimal.

Will you ever see a case where only the red pixels on the CCD contribute any luminance data? Sure. It happens all the time. There is a quantum efficiency curve for each color filter over each pixel. Take a look at the red, green, and blue curves on this graph as an example:



The point at which you only have data in the red channel and none in the blue/green channels is simply the point at which a given frequency of red still has enough quantum efficiency to give you a meaningful voltage on the photosensor, while the blue/green sensors are not capturing enough to rise above the noise floor.

It happens all the time... in almost every image with a wide variety of colors.

--
Mike
http://www.ddisoftware.com
 
Every pixel contributes luminance. Check the following post:
Of course it does. I never said anything to the contrary.
To "contribute" to luminance, you have to have something of value
to contribute.
I think it's a distinction between what happens in specific images and what happens mathematically. The luminance is a function of the R, G, B values. Thus, every pixel that contributes an R, a G, or a B contributes mathematically to the luminance signal.

Sometimes the value plugged into the luminance equation is something very close to 0. That is a function of the specific image that's hitting the sensor.
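
For concreteness, one common luminance (luma) weighting is the Rec. 601 one shown below; other standards use slightly different coefficients, but the point is the same: R, G, and B all enter the equation.

def luma(r, g, b):
    # Rec. 601 weighting: every channel enters the sum, just with different weights.
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(255, 0, 0))    # saturated red: about 76; the G and B terms happen to be 0 here
print(luma(255, 25, 0))   # a small green contribution still moves luminance, to about 91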

If we're clear about whether we're talking about specific images or equations, I think everybody can more or less be in agreement.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
It happens all the time... in almost every image with a wide
variety of colors.
Personally, I think your argument would have far greater weight if you showed realistic images where this happens, as opposed to playing with resolution charts. If, as you say, it happens all of the time, then it should be easy to build a page full of real-world samples.

--
Erik
 
First, you should check out the following site on interpolation:

http://ise.stanford.edu/class/psych221/99/tingchen/main.htm

The essence of a bayer interpolation is that, for a green pixel, you have to use adjacent pixels to solve for the missing blue and red values. For red, you have to solve for green and blue, and you need to find red and green for blue.

A red pixel will, however, usually contribute a great deal to the red component of its own RGB triplet. Same for blue and green.
But the issue here is not whether the green and blue pixels
can all be black/zero or not, it is whether or not the green/blue
pixels are picking up enough information to contribute to the
interpolation! In many cases, in real images, they do not.
This is true to a certain extent, but even the lack of information is information. The fact that no interpolated green/blue value exists for a red RGB triplet contributes directly to the purity of that color.

OTOH, a 24 bit output pixel can be one of 16 million distinct color values, of which only 768 are completely pure r, g, or b tones.

A simple change in texture will directly affect absorption and reflectance, absorbing or scattering light of other colors. You don't want to know what the odds are of a red pixel having NO additional G/B components in all of the adjacent pixels that are going to be sampled.

And if there IS absolutely no variation in the adjacent channels, then the odds are VERY high you're looking at a smooth tone with no texture to resolve to begin with.
... if your red channel is 90% of maximum and
your green and blue pixels are 10%, the contribution to image
resolution by the green and blue pixels is nearly useless to a
demosaicing algorithm.
That's incorrect. A shift in color can convey nearly as much information regarding image detail as a shift in intensity. A shift in a green pixel of just 10% could potentially shift its adjacent red value from 255,0,0 to 255,25,0. And you will see that variation in color tone as detail.

And most of the above is just for relatively straightforward algorithms. Some will pull all of the data from all of the sensors and first build a pure luminance map of values, then overlay chroma onto the map.

Finally, the chart from FillFactory is interesting...



...as it shows their sensor has a much broader cross-over between channels than the one you showed earlier. Obviously there are major differences in sensors and the algorithms used to manage them.

As I've said before: it's a hack. But it's also a very, very elegant one. The amazing thing -- to me anyway -- is that it works at all.
 
Michael,

Funny you should show the Fill Factory chart; I have been playing with it to compare it with the Foveon chart from their patent (that is the only chart for the X3 I have seen; I would be happy to use a better, more up-to-date chart if there is one).

One of the issues for the X3 is that while silicon will naturally separate colors by depth, it does not do it very well, or at least not nearly as well as a color filter.

One of the big FALLACIES I have seen in many of the X3-to-Bayer comparisons is the IMPLIED assumption that the X3 is getting as good a sample as the Bayer-filtered pixel. The "all other things being equal" assumption is not true, at least today.

First and most obvious is the fact that the Bayer cameras have more "pixels" (yes, I agree that Bayer overstates its "pixels"). Still, I don't think it is an accident that the X3 came out with about half the number of pixels; it is a trade-off that Foveon had to make with the realities of making their sensor. If they made the pixels any smaller, they would have really killed their light sensitivity.

Second, the Foveon X3 is not getting as good a sample in each color as a given Bayer sensor, and that is the reason for the ISO 400 limit on the SD9. Noise/inaccuracy of the sampling and resolution are related: if I have a noisy image, I can do things like median filtering to reduce the noise, but doing so will also hurt resolution.

The charts for a Bayer sensor (as produced by Fill Factory) and the Foveon X3 (from their patent) are shown at the link below.

http://www.fototime.com/{4E9D3EB2-0C8C-4874-9BF6-A933F7EB61D4}/picture.JPG

I would expect that the Foveon chart from their patent is just an approximation, and it has no scale for the vertical axis. The Fill Factory chart appears to be from some measured data. Also of interest to some: if one goes out toward infrared (beyond about 700 nanometers), the sensors' response comes back up even with color filters, which is why there are IR filters in the cameras.

I thought it would be instructive to overlay the Foveon response pattern with the Fill Factory pattern which I have done below (stretching the Fill Factory pattern to "fit" the Foveon Patent). I have added some color to the Foveon black lines to indicate the various colors. I have also put a chart at the top showing the colors (very approximately) for the various wavelengths.

http://www.fototime.com/{A57CD661-72D6-4613-AD8F-8B75CAC07CB0}/picture.JPG

What one should notice is how poorly, relative to the Bayer (by Fill Factory), the X3 appears to separate the colors. The "color" of a given pixel has to be computed as a function of the other color sensors. Notice for the Foveon how little difference there is between the blue and green responses, and that a significant amount of each color is still detected by the other sensors. Very roughly speaking, the crosstalk between the colors detected by the various sensors has to be subtracted out to get a given color. The more the curves overlap, the harder it is to determine the actual color.
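
To put a rough number on "harder to determine," you can treat the three response curves as a 3x3 mixing matrix that has to be inverted to recover R, G, and B. The matrices below are invented, not taken from either chart, but they show how heavier overlap makes that unmixing amplify noise more:

import numpy as np

# Rows: what the three sensors report; columns: true R, G, B content (invented numbers).
well_separated = np.array([[1.0, 0.1, 0.0],
                           [0.1, 1.0, 0.1],
                           [0.0, 0.1, 1.0]])
heavy_overlap  = np.array([[1.0, 0.7, 0.3],
                           [0.6, 1.0, 0.6],
                           [0.3, 0.7, 1.0]])

for name, m in (('well separated', well_separated), ('heavy overlap', heavy_overlap)):
    # The condition number of the matrix is a rough measure of how much sensor
    # noise gets amplified when the crosstalk is subtracted back out.
    print(name, 'condition number:', round(float(np.linalg.cond(m)), 1))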

The Bayer filter is not perfect either, but it is certainly much better separated. As Michael Long points out, some light from the other colors gets into each color sensor. Combined with the information from the surrounding pixels, this can be used to help predict the color of a given pixel even if it is in reality a different color than the one the filter was supposed to detect.

Thus, from a color perspective, there is very little color signal for a given amount of light with the X3 method. So while the Foveon X3 is getting more samples of color, it would appear that EACH sample is less accurate, since there is less "color signal."

It is very clear, from the samples, from the fact that the conversion has controls for sharpness and contrast, and from the fact that it would simply be good signal-processing practice, that the X3 conversion is looking at multiple pixels to determine a given pixel's R, G, and B. I would not be surprised if, as the intensity of the light goes down, the conversion weighs in more of the surrounding pixels to try to keep from going off on a tangent in terms of color. This would have the effect of reducing resolution to keep from getting the color very wrong.

So how will this show up in the images? The most obvious, as already mentioned, is the ISO limit, which is pretty low for the SD9 compared to the competition. Note that an ISO difference of 2X means TWICE the light, so this would suggest that the X3 may be 2X less sensitive (this is very rough, as we don't have noise comparisons at various ISOs yet). The next will be how the sensor behaves in the shadows/less illuminated parts of the image. I would expect that, depending on the conversion trade-offs, the color will be off and/or the resolution will drop. I would also expect that the SD9 could have problems determining the color in very bright areas of the scene.

I would expect that the SD9 may have trouble getting the color right depending on the intensity and the color of the light. Looking back at the chart, and making the BIG assumption that the SD9's sensor response is similar to the pattern in the patent, it would suggest that it may have problems detecting the difference between cyan and blue.

Anyway, we will eventually see when the side by side comparisons come out.
--
Karl
 
There's one problem with your comparison of spectral responses. You are basing the Foveon graph on a 4-year-old patent. They have had 4 years to improve on their patent before they actually implemented their design, so the X3 sensor's color response may actually bear little resemblance to your chart.

--
Mike
http://www.ddisoftware.com
 
Sorry to rain on your crusade, Karl, but the overlap doesn't hurt your ability to recognize colors as long as every distinct hue can be mapped to a distinct R, G, B combination in the sensor's space. There is a practical issue in constructing the transformation between these spaces, but as I've pointed out to you many times, mapping between spaces is a common problem with well-known solutions.

Your claim that the sensor would have trouble distinguishing between cyan and blue is just wrong. From the very graphs that you provided, it's clear that for cyan, the sensor will see some red, and much more green than blue. While for blue, the sensor will see no red, and much more blue than green.

More formally, all we need for the sensor to do its job in terms of resolving correct hues is for there to be a bijection between hues in the sensor's space and hues in perceptual space. We can think of each color's response curve as a basis function. What matters is that the three layers form a basis that spans the perceptual space. If they do, we can do a change of basis into, e.g., sRGB.
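
As a small numerical sketch of that change-of-basis point (the response matrix below is invented for illustration; it is not Foveon's actual response): as long as the three curves are linearly independent, cyan and blue produce different raw triplets, and a single 3x3 inversion maps them back.

import numpy as np

# Invented responses of the three layers (rows) to pure sRGB R, G, B (columns).
sensor_basis = np.array([[0.9, 0.4, 0.1],
                         [0.3, 0.8, 0.5],
                         [0.1, 0.5, 0.9]])
to_srgb = np.linalg.inv(sensor_basis)    # the change of basis back to sRGB

for name, c in (('cyan', np.array([0.0, 1.0, 1.0])),
                ('blue', np.array([0.0, 0.0, 1.0]))):
    raw = sensor_basis @ c               # what the sensor reports for this hue
    print(name, 'raw:', raw.round(2), 'recovered:', (to_srgb @ raw).round(2))

# The two hues give distinct raw readings and invert back exactly; noise and
# limited dynamic range are where the practical argument lives, as noted above.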

Note that some overlap is necessary even in a Bayer pattern solution if colors such as yellow are to be detected at all. If you want to construct an argument that is more compelling, you will need to argue that because the basis functions for the X3 sensor are further from orthogonal than a "pure" Bayer pattern set, you may have trouble spanning a wide range of saturations for some colors given the limited dynamic range of the sensor.

I could be persuaded by such an argument, but this is a relatively subtle argument that will depend upon a lot of data we just don't have. Until then, I don't think it serves anybody to spread FUD.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
There's one problem with your comparison of spectral responses.
You are basing the Foveon graph on a 4 year old patent. They have
had 4 years to improve on their patent before they actually
implemented their design, so the X3 sensor's color response may
actually have little resemblence to your chart.
First, I agree that the patent is 4 years old and that Foveon may have some "tricks" to get better separation. The data for Bayer sensors is published, but like so much of Foveon's "breakthrough," so much is kept secret (most people with real, solid breakthroughs give a bit more data and less hype). Still, I don't think the physics of silicon has changed in 4 years. If Foveon requires special doping of the silicon to get better separation, they move farther away from "standard CMOS" and thus become more expensive.

The POINT was that it is still MOST LIKELY that the X3 cannot separate colors as well as a given Bayer site for a given color. It is a FACT that the X3 currently has problems at high ISO, and it is LIKELY that this is ONE of the problems they have.

--
Karl
 
Mr Parr your crusade to be Foveon's "great defender" is rather tiring.

Maybe in all your great wisdom you can explain why the SD9 is only rated to ISO 400? Maybe it is that Foveon just wanted to be nice to the Bayer guys? It would be nice if you would stop being Foveon's lap dog and realize that there are pros and cons to the approach. As of today, the X3 is more a technical curiosity than a breakthrough that the market has to wait for.

Some overlap of the color space is certainly not a killer, but the overlap here is pretty large. If what you said were TRUE, then the Bayer people would use "very sloppy" filters rather than trying for relatively sharp color filters. As you are probably aware, Kodak has used CMY filters to increase sensitivity at the expense of color accuracy. When there is too much overlap, the ability to detect various colors is hurt; that is a FACT. Then you have to factor in the issue of NOISE and how it limits what you can detect.

Yes, it is tough to know much for sure because, so far, Foveon has given out much more hype than information. Thus those of us who have been in the industry had our "hype detectors" go off pretty strongly. If you want to believe everything Foveon tells you, that is your problem, and it shows ignorance of how high-tech companies overstate the benefits and understate or hide the drawbacks.

--
Karl
 
Mr Parr your crusade to be Foveon's "great defender" is rather tiring.
Right... It's my sincere hope that you'll get tired of being corrected and start telling it straight.
Maybe in all your great wisdom you can explain why the SD9 is only
rated to ISO400? Maybe it is a that Foveon just wanted to be nice
to the Bayer guys?
It's quite ironic that you would mention this since I've been offering detailed explanations which are more or less consistent with your theories regarding well sizes and QE. You tend to ignore these, even repeating the same answers in a thread when I've already provided them. There's a real lack of candor in your comments and your implication that I am unable to explain the reduced sensitivity or predisposed to rationalize it away.
It would be nice if you would stop being
Foveon's lap dog and realize that there are Pros and Cons to the
approach.
I've discussed the pros and cons quite clearly and from a disinterested position. How much stock do you have in TI? How many of your friends work for TI making DSPs? A while back I provided you with a bunch of links describing the use of programmable TI DSPs for Bayer interpolation:

http://www.dpreview.com/forums/read.asp?forum=1000&message=3440066

You read this message and responded to it, but later said, "I don't expect to see programmable DSP's in Cameras until they get wireless connect.":

http://www.dpreview.com/forums/read.asp?forum=1027&message=3481765

when in fact you already knew the opposite to be the case. So, who's spreading misinformation here?
As of today, the X3 is more a technical curiousity than
a breakthough that the market has to wait for.
The market will decide this, not you or I. I've told you on many occasions that I own a D60 and don't expect to sell it to get an SD9. I've also indicated that I will wait for Phil's review to make any final judgements.

The only one who seems to think he's qualified to judge the practical impact of this technology based upon very little information is you.
Some overlap of the color space is certainly not a killer, but the
overap is pretty large. If what you said is TRUE, then the bayer
people would use "very sloppy" filters rather than trying for
relatively sharp color filters.
This is a non sequitur. Some overlap is necessary to be able to resolve colors such as yellow, which must be detected by both the red and green sensors. Too much overlap will create challenges for the reasons I have described. The problem is that your portrayal of the issues involved was really quite incomplete and misleading.
Yes it is tough to know for sure much because so far, Foveon has
given out so much more hype than information. Thus those of us
that have been in the Industry had our "hype detectors" go off
pretty strong. If you want to believe everthing Foveon tells you,
that is your problem and your ingnorance of how high tech companies
overstate the benefits and understate or hide the drawbacks.
I'm very clear on the distinction between the mathematical/physical principles involved and the challenges of producing a working device. If you think otherwise, then you haven't been understanding my messages.

You keep talking about hype but this is a hollow complaint because people have seen the images this technology can produce. The folks here aren't excited about press kits; they're excited about real images taken with a real camera. They've also gotten a glimpse at the limitations from Phil's noisy ISO 400 shots, but they still find the technology intriguing.

The problem is not hype. The problem is not ignorance about the tradeoffs and challenges. The problem is in your mind.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
First, you should check out the following site on interpolation:

http://ise.stanford.edu/class/psych221/99/tingchen/main.htm
Don't need to. I have actually implemented every interpolation method on that site and tested them on real raw images. No Bayer interpolation can compensate well enough to be very effective at predicting data when saturated reds or blues are being sampled. The better algorithms (like the gradient method) all depend on making a correlation between the RGB channels. The problem with doing that is that the frequency response of your gradients depends on the primaries, so you are basically back to square one.

Take a look at:

http://ise.stanford.edu/class/psych221/99/tingchen/algodep/vargra.html

Let's look at the top figure and assume that we have an area of saturated blue that we are trying to resolve, where the green and red channels change very slightly compared to a large change in blue. We might be looking at where the blue and black sections of a sneaker meet, for example. In this area you should have a sharp, well-defined edge where the black meets bright blue. Going across the edge from black to blue, the blue channel might change from 7 to 150, the red channel might change from 3 to 9, and the green channel might change from 5 to 12. These numbers are actually quite realistic, as I've seen real samples in that range.

Using this example, take a look at R13 and consider how we would interpolate the value for blue at this pixel. First, you pick the gradients that have the least variance. You then use those to calculate the value of blue at R13 by taking the red value and adding the difference between blue and red in the gradients being considered. Here's the problem. Your gradients will be largely dictated by only the blue channel, so you will be selecting gradients that are heavily weighted to only following the blue channel. This is due to the fact that in the window being considered, the blue variance is 143 (150-7) while the red variance is only 6 (9-3) and the green variance is 7 (12-5). Your algorithm is therefore heavily biased to only following blue information which is sampled only at every fourth pixel.

All viable interpolation algorithms weight the influence of the three channels such that the influence is based on the channel variance. The gradient method is the best (or some variant of it), however, the other algorithms have similar problems. In the example above, none of the algorithms are going to try to "drive" the blue channel by a straight correlation when looking at blue and green pixels that hardly change relative to red. To do so would introduce specular highlight errors, banding, and an immense amount of image noise from trying to drive a bright channel with values based on very low voltages that are close to the noise level of the sensor. Trust me on that one: I've tried it. :-)
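
Here's a stripped-down sketch of that selection step (this is not the exact algorithm from the Stanford page, and the two candidate windows are invented stand-ins built from the 7/150, 3/9, and 5/12 numbers in the example above):

# Candidate interpolation directions around the pixel, each with the (low, high)
# values seen per channel in that direction's window.
candidates = {
    'across the edge (black to blue)': {'R': (3, 9), 'G': (5, 12), 'B': (7, 150)},
    'along the edge (stays black)':    {'R': (3, 4), 'G': (5, 6),  'B': (7, 9)},
}

for direction, channels in candidates.items():
    # Total variation in this direction, summed over channels; the algorithm
    # prefers the direction with the least variance.
    variation = sum(hi - lo for lo, hi in channels.values())
    print(direction, 'variation =', variation)

# Across the edge the total (156) is almost entirely the blue channel's 143,
# with red (6) and green (7) barely registering, so the choice is effectively
# driven by a channel that is sampled at only one photosite in four.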
But the issue here is not whether the green and blue pixels
can all be black/zero or not, it is whether or not the green/blue
pixels are picking up enough information to contribute to the
interpolation! In many cases, in real images, they do not.
This is true to a certain extent, but even the lack of information
is information. The fact that no interpolated green/blue value
exists for a red RGB triplet contributes directly to the purity of
that color.
I agree, but in this case the green and blue pixels cannot contribute to resolution, only to color, and you'll see a lot of jaggies around sharp edges.
... if your red channel is 90% of maximum and
your green and blue pixels are 10%, the contribution to image
resolution by the green and blue pixels is nearly useless to a
demosaicing algorithm.
That's incorrect. A shift in color can convey nearly as much
information regarding image detail as a shift in intensity. A shift
in a green pixel of just 10% could potentially shift its adjacent
red value from 255,0,0 to 255,25,0. And you will see that variation
in color tone as detail.
We're talking about edge detail where 255,0,0 meets 0,0,0 (or values close to those). A sharp red/black edge isn't going to benefit much from a 10% shift in any RGB channel, or at least, it's certainly not going to look like the sharp edge that it really is.

--
Mike
http://www.ddisoftware.com
 
First, I agree that the patent is 4 years old and that Foveon may
have some "tricks" to get better separation. The Data for Bayers
is published, but like so much of Foveon's "breakthough," so much
is kept secret (most people with real solid breakthoughs give a bit
more data and less hype). Still, I don't think the physics of
silicon has changed in 4 years. If Foveon requires special doping
of the silicon to get better separation, they then move farther
away from "standard CMOS" and thus become more expensive.

The POINT was that it is still MOST LIKELY that the X3 cannot
separate colors as well as a given Bayer sight for a given color.
It is a FACT that the X3 currently has problems at high ISO, and it
is LIKELY that this is ONE of the problems that they have.
I have no basis to argue with that, and it may very well be true. We just don't know to what extent. It still doesn't change my observation that resolution varies greatly on a Bayer sensor depending on the color being sampled, and that the X3 (while it may have the same problem) will have the problem to a much lesser degree. That's what this thread is all about, and why I think you cannot base resolution on just B/W res targets any more, when comparing Bayer to X3 images.

--
Mike
http://www.ddisoftware.com
 
