After reading some of the posts in this thread, I feel inclined to explain why it is impossible to achieve the same results with digital zoom as with optical zoom. If you are already familiar with the reasons, just skip this part of the message.
---
While it IS true that, because of the resolution of the G1 and other cameras, you have enough detail to crop and blow that detail up, it is NOT possible to recover with software information that was never captured by the camera's sensor.
Many sci-fi television shows have the scene (we've all seen it) where people are standing around a TV screen and you can't tell jack about what's on it. Then the technician plays with a computer and BOOM: a sharp, in-focus, color picture fades in on the TV. How did that magician pull it off? They took a good image and made it really bad... then showed it in reverse, so that you see a bad image made really good.
I will give an easy explanation. Say I use an 8x teleconverter (I have such a device, the Kenko 8x32 monocular, and it's excellent) and I take a picture through it at full tele. My camera, the PowerShot S20, has a 32-64mm range, so call it 512mm, and I do have to focus the device through the LCD viewfinder. The optics bring the image 8x closer, and then the 3.14 million sensors record that data. Now, to keep the arithmetic simple, pretend the zoom is only 3x: bringing the subject 3x closer means each detail covers 3x3 = 9 times as many pixels, so for every pixel of data captured at 1x zoom, you get 9 at 3x zoom. Now say that instead I just res the 1x image up by 3x3 using some software technique.
When you compare this software-zoomed image to the optical 3x zoom, you will notice that the optical zoom actually measured the light at each of those locations, whereas the software zoom created data in between the measured values. The created data may be quite convincing or pleasing (especially if the algorithm was designed well), but it will never substitute for real data.
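If you prefer code to words, here is a minimal sketch (Python with NumPy; the "sensor readings" are made-up numbers) of what a 3x3 software zoom actually does: every new pixel is just a weighted blend of the measured ones around it, so no new information appears.

import numpy as np

# pretend these four values are real sensor readings at 1x zoom
measured = np.array([[10.0, 200.0],
                     [40.0,  90.0]])

def upsample3x_bilinear(img):
    # Resize by 3x in each direction using plain bilinear interpolation.
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * 3)          # fractional source rows
    xs = np.linspace(0, w - 1, w * 3)          # fractional source columns
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    tl = img[y0][:, x0];     tr = img[y0][:, x0 + 1]
    bl = img[y0 + 1][:, x0]; br = img[y0 + 1][:, x0 + 1]
    top = tl * (1 - fx) + tr * fx
    bot = bl * (1 - fx) + br * fx
    return top * (1 - fy) + bot * fy

zoomed = upsample3x_bilinear(measured)
print(zoomed.round(1))   # 9 pixels for every original pixel, all derived from the same 4 readings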
---
I have two additional points to make, that those of you who already knew all of the above might find interesting/useful.
1)
You have a 3.3MP camera. Congratulations. When you take a picture, a matrix of 2048x1536 Charge Coupled Devices (sensors) measures the light transmitted through a lens, color filter, IR-cut filter, and perhaps a "microlens". Each sensor is attached to an Analog-to-Digital Converter (either directly or through its neighbor, but how it is connected is less important than the fact that it is connected). The ADCs measure the magnitude of the charge to some precision, expressed in some number of Binary digITs (BITs). OK, now that all of that has happened, we get to the important part: your camera (your computer does this part if you shoot RAW, which is one reason it is better to shoot RAW) takes every group of four sensor values, throws away some data (applies exposure and reduces to 8 bits per color channel), combines the colors to form one pixel (you lose 4x resolution in this step, sacrificing resolution for color; your 3.14MP B&W sensor array just became roughly 0.8MP... color), and then goes through a process of throwing away still more data (JPEG compression).
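For the curious, here is a rough sketch of that "combine four sensors into one pixel" step, in Python/NumPy. I am assuming an RGGB Bayer layout here (the exact filter pattern varies by camera), and the raw values are just random stand-ins, so treat this as an illustration rather than what your camera literally runs.

import numpy as np

# stand-in for a 2048x1536 single-color mosaic straight off the sensor
raw = np.random.randint(0, 1024, size=(1536, 2048)).astype(float)

r = raw[0::2, 0::2]                              # red sites
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # average of the two green sites
b = raw[1::2, 1::2]                              # blue sites

quarter_res_rgb = np.dstack([r, g, b])
print(quarter_res_rgb.shape)   # (768, 1024, 3): 3.1M single-color sensels -> ~0.8M full-color pixels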
Now, how do we get a 3.14MP color image out of the camera? You guessed it: that 4x throw-away process is a little trickier than simply throwing the data away. The camera takes the neighboring sensor values and merges (interpolates) them. This leaves us with another problem that we computer graphics people call aliasing. I call it "color aliasing", because the color is being aliased across the pixels. Digital cameras like to hide this by filtering the image with some amount of Gaussian blur.
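Here is the trickier version as another hedged Python/NumPy sketch, with the same assumed RGGB layout: at a red or blue site the camera never measured green, so it invents a green value from the measured green neighbors. Real cameras do the same kind of thing for red and blue too, usually with cleverer math; this only shows the idea for the green plane.

import numpy as np

def interpolate_green(raw):
    # Estimate a full-resolution green plane from an RGGB mosaic by filling
    # the unmeasured sites with the average of the four measured neighbors
    # (image edges are handled crudely by replication).
    h, w = raw.shape
    green = raw.astype(float).copy()
    padded = np.pad(green, 1, mode="edge")
    neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    yy, xx = np.mgrid[0:h, 0:w]
    measured_green = (yy + xx) % 2 == 1    # green sits where row+col is odd in RGGB
    green[~measured_green] = neighbors[~measured_green]
    return green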
That blur trick works well enough, but take a picture of a sharp, contrasty edge and sharpen it a little and you will see the colors separate along the edge. You will only see this under the microscope (zoomed to actual pixels), though, and as you down-sample the image to, say, 0.8MP (1024x768 is pretty much full screen at 72 DPI), you will notice that these color aliasing problems go away. Note that you tend to get the worst aliasing in high-contrast detail; in less detailed areas of the image, the digital camera's trick works well enough.
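And a tiny sketch of why the down-sampling hides it (Python/NumPy again, assuming an HxWx3 float image with even height and width): a 2x2 box average blends each fringed pixel with its neighbors, so single-pixel color errors largely cancel out.

import numpy as np

def box_downsample_2x(rgb):
    # Average each 2x2 block of pixels into one; chroma fringes a single
    # pixel wide get diluted by their three unfringed neighbors.
    return (rgb[0::2, 0::2] + rgb[0::2, 1::2] +
            rgb[1::2, 0::2] + rgb[1::2, 1::2]) / 4.0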
So the point I want to make is that 1024x768 is actually closer to the real fidelity of your camera. Note that JPEG compression also trashes some amount of data, so it is better to shoot at full resolution with JPEG or RAW than at 1/4 resolution, if you can afford the storage (and storage is cheap these days).
Some people give Fuji cameras a hard time because their interpolation algorithm and sensor placement are a bit more aggressive than the typical design, but it seems to work well; it just produces more color aliasing artifacts (Phil calls it "hair moire").
(see the middle of this page: http://www.dpreview.com/reviews/fujis1pro/page13.asp )
A company called Foveon ( http://www.foveon.net/ ) uses three sensors and a prism instead of interpolation to take a 4MP image with 12 million sensors.
2)
So the camera you bought has a bit of a tendency to separate colors according to the location of the sensors over the filters. Big deal. It's 3 million pixels (pixel is short for picture element). At 1024x768 and 72 DPI, your image prints to something on the order of 14" x 11". Take your 3.3MP image of a good subject. The color aliasing problem is usually compensated for well enough by the anti-aliasing filter in your camera (microlenses help too), the JPEG algorithm helps a little, as does the in-camera processing, and then there is the printing technology itself: a quality printer at something like 3600 Dots Per Inch (DPI) puts down a pattern of ink blots on the paper so that, when you look at the page, a pixel is defined by an area of the page. The ink blots themselves are too small to see, and their borders are not well defined enough to outline a pixel, so each pixel blends with its neighbors. This dithering is a good form of anti-aliasing, and you will only be able to see it under magnification.
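If you want to check the print-size arithmetic yourself, here it is as a tiny plain-Python helper. The 300 DPI line is just my own example figure, not a number from any camera or printer spec, and the 600x400 crop comes up again in the next paragraph.

def print_size_inches(width_px, height_px, dpi):
    # width and height of the print, in inches, at a given pixels-per-inch
    return width_px / dpi, height_px / dpi

print(print_size_inches(1024, 768, 72))    # ~14.2 x 10.7 inches: the 14" x 11" figure above
print(print_size_inches(2048, 1536, 300))  # ~6.8 x 5.1 inches from the full-resolution frame
print(print_size_inches(600, 400, 100))    # the 600x400 crop is exactly 6" x 4" at 100 DPI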
So go ahead, crop and print your pictures at 70-200 DPI (dithered on a much higher-DPI printer, preferably one with a 6-color process, because it does make a difference) from a 3.3MP camera. You'll get your money's worth. Just take my word for it that a 600x400 crop from a 3.3MP image is not going to look nearly as good at 6"x4" as the full 2000x1500 shot printed at 6"x4". That is how much difference the little piece of glass (or big piece of glass) makes. I'm quite glad I bought an 8x zoom adapter. I'll post a few shots when I get back from Seattle with my new toy...
I hope this helps.
There is a built-in telephoto option on the G1 that doesn't seem
ever to be mentioned on this Forum. Personally I use it a lot.
I refer to the fact that the resolution is so good that, except
where very large pictures are wanted, you can simply enlarge
the relevant portion when printing. You can go way beyond a 1.5x
zoom before the image goes significantly soft.
The other advantage of doing this is that you can choose the exact
part of the picture that you want, something you can't do with full
frame telephoto. In fact I have sometimes got two different prints
out of one shot.
Of course you have to shoot at high resolution to enable this, but
with memory prices now very low and MicroDrives not too bad, why
waste a high-quality camera on low-res shots?
Chris Beney