That's very interesting, but I think your primary premise is incorrect. What you're doing is simply increasing pixel resolution to see the diffraction pattern.
For the confused, just look at the image on this page...specifically, the center image.
http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/raylei.html
What's being referred to is the slight dip between the two Airy disks. If you imagine pixels along the bottom of the image, with the peaks and dips representing light levels, you'd see that you need at least one pixel at the first peak, one at the center dip, and one at the second peak to capture all the "detail".
The problem is that the center dip isn't image detail...only the peaks are. The center dip emerges as the distance between the two real detail points shrinks; it's an artifact of the merging of two Airy disks, and capturing it doesn't represent a tangible increase in the amount of detail captured from the scene. So reducing pixel size to half the radius of the Airy disk doesn't get you anything.
But not to worry...I'm sure ejmartin will jump in and straighten us all out.