If you make two photographs of a subject with the same lens, aperture, and shutter speed on two cameras that are identical except for sensor resolution... and then print the two images at the same size... there will be precisely the same amount of blur.
Your explanation is sound in most respects and is helpful. However, this paragraph challenges my long-held view that for handheld shooting, as resolution increases, technique must also improve; otherwise, any blur caused by poor stance, grip, breathing, or shutter release will be more obvious with a high-resolution sensor than a lower one.
I would value your view on this.
Let's step back and think about this a moment. What causes motion blur to be present in a photo? It's caused by movement in the frame during the shutter actuation. Given enough movement in the frame and a long enough exposure time, the movement will be visible as blur in the resulting photo.
Suppose two cameras of a given format are set up side-by-side. They use the same model lens at the same focal length and f-stop. They have identical framing and are focused on the same point in the frame. They use the same shutter speed and ISO when making photos of something moving through the frame. The only difference between the cameras is that one is built around a 26MP sensor and the other is built around a 40MP sensor.
Does changing the pixel density change the area of the frame, the sensor, or the resulting photo that contains blur? No.
Based on the exposure time and the rate of movement through the frame, we could calculate what percentage of the total width of the frame that movement would have covered during the shutter actuation. For the sake of discussion, let's say the movement covered 5% of the width of the frame.
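To make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. The subject speed, exposure time, and frame width are made-up illustrative values chosen so the result lands on the 5% figure above:

```python
# Fraction of the frame width that a moving subject covers during the exposure.
# All numeric inputs below are illustrative assumptions, not measured values.

def blur_fraction(subject_speed_m_s, exposure_s, frame_width_at_subject_m):
    """Distance moved during the exposure, as a fraction of the frame width."""
    distance_moved = subject_speed_m_s * exposure_s
    return distance_moved / frame_width_at_subject_m

# Example: subject moving 10 m/s, 1/100s shutter, frame spans 2m at the subject.
fraction = blur_fraction(10.0, 1 / 100, 2.0)
print(f"Movement covers {fraction:.1%} of the frame width")  # 5.0%
```

Note that pixel count appears nowhere in this calculation; only subject motion, exposure time, and framing matter.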
It's the amount of movement relative to the total size of the frame that determines whether or not the movement is visible to the eye in the photo. If there's movement in the frame that only covers a tiny fraction of the scene, we won't see that movement. It won't be visibly blurry.
Suppose we used 600mm lenses with 2x TCs attached to make the photos with the two cameras. Imagine we had a third camera in the mix. This third camera would be built around the same format sensor, using the same shutter speed, but using a 20mm focal length lens.
The angle of view would be 60x wider with the 20mm lens than the angles of view captured with the 1200mm focal lengths on the other two cameras. The shutter speed used by the camera fitted with the 20mm lens is the same, so it captures the same amount of movement. However, due to the substantial increase in the angle of view, and to the moving subject appearing 1/60th the size in that wide-angle photo, the movement captured would cover less than 1/10th of 1% of the width of the frame. The same movement wouldn't be visible in the wide-angle photo; the moving subject would not look blurred.
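The wide-angle scaling can be sketched the same way. The 5% starting figure is the assumed blur at 1200mm from the earlier example:

```python
# Scale the blur fraction by the change in subject magnification between lenses.
telephoto_mm = 1200        # 600mm lens with a 2x TC attached
wide_mm = 20
blur_at_telephoto = 0.05   # 5% of frame width, the assumed figure from above

magnification_ratio = telephoto_mm / wide_mm   # 60x
blur_at_wide = blur_at_telephoto / magnification_ratio
print(f"Blur covers {blur_at_wide:.3%} of the frame width")  # about 0.083%
```

At roughly eight hundredths of a percent of the frame width, the same physical movement simply isn't resolvable to the eye.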
The differences in appearance are due to the differences in angle of view, magnification of the subject and the area within the frame covered by the movement. Pixel count doesn't play a role. The blur would cover more pixels in the photo made with the higher megapixel sensor but, when comparing two prints of the same size made with the first two cameras, the motion blur would look the same. It wouldn't matter that the blur covers more pixels in one photo versus another. The blur would have the same scale and cover the same area of the whole frame.
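Pixel count only changes how many pixels the same blur spans, not its scale in a same-size print. A quick sketch, assuming typical horizontal pixel widths for 3:2 sensors of those megapixel counts:

```python
# Same blur fraction, different pixel counts: the blur spans more pixels on the
# higher-resolution sensor but covers the same share of the frame either way.
blur_frac = 0.05        # 5% of frame width (assumed, from the example above)
width_26mp_px = 6240    # approx. horizontal pixels of a 26MP 3:2 sensor
width_40mp_px = 7728    # approx. horizontal pixels of a 40MP 3:2 sensor

for name, width_px in [("26MP", width_26mp_px), ("40MP", width_40mp_px)]:
    print(f"{name}: blur spans {blur_frac * width_px:.0f} pixels, "
          f"but still {blur_frac:.0%} of the frame")
```

More pixels record the blur in finer detail, but in two prints of equal size the smeared region occupies the same physical area.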
We could make the blur in either photo more prominent to the eye by zooming in and magnifying the images. But as long as we're viewing both photos at the same scale, the amount of motion blur will look the same.
--
Bill Ferris Photography
Flagstaff, AZ
http://www.billferris.photoshelter.com