On-Sensor Phase Detect AF Implementation

Interceptor121

I have two different brands of mirrorless camera with PDAF.

The first set (Sony) always focuses at the aperture dialled in, except when hitting f/11, beyond which you can tell the camera not to stop down further.

The second set (Panasonic) focuses like a DSLR, with the lens wide open, unless you tell the camera to do live view, in which case it starts using the aperture dialled in.

Considering that the sensor is being scanned at a fixed rate, and that those are not the pixels used for exposure, I see benefits and drawbacks to both systems.

In bright scenes it makes sense to close down the aperture to avoid an overflow of light, while in low light it seems more appropriate to focus wide open.

Another parameter is the ISO value dialled in, which does not seem to have an effect on focus, as if the camera were increasing gain as much as needed in order to focus and then applying the dialled-in gain only for the exposure.

Finally, the sensor readout can drop in low light but does not go faster in bright light. The shutter speed is simulated only in live view, so it is not clear what the camera does when live view is off, but it clearly cannot drop below the readout rate, which again can drop in low light.

I am trying to determine the best conditions in which to activate live view (or not) with my cameras.

I am getting the impression that in bright scenes live view makes sense, to avoid highlights that could create focus problems, while in dark scenes I would not only avoid live view but also let the sensor readout drop in order to focus.

I am sure Nikon and Canon may have different implementations again, so this cannot be generalised, but I was wondering whether I am on the right track and how I could build a test procedure to validate these assumptions.
 
Yes, and it's also been mentioned before in other venues that OSPDAF systems don't enjoy the separation of the image pairs that DSLR PDAF provides. Thus, not only do the image pairs experience strong changes in sharpness as focus is changed, they also lie almost on top of each other. The result is poor focus discrimination near optimal focus. DSLR PDAF, by contrast, always has image pairs that are relatively focused and non-interfering.
As I and 'kenw' have written, the only difference in principle between PDAF in an SLR and a MILC is the position of the sensor. A (D)SLR has the AF sensor at the bottom of the mirror housing, and a MILC (ab)uses the imaging sensor for PDAF. A minor difference could be the resolution of the different sensors.
The principle employed is identical. However, there are devils in the details of implementation. Many here, including myself, refer to Marianne Oelund's rigorous teardown, analysis, and optical bench emulation of the Nikon D300 AF subsystem to understand the DSLR PDAF system's intricacies. Her report, originally published here, is now hosted on photonstophotos.net. Unfortunately, there is no similar lodestone for OSPDAF systems. It is impossible for an end-user to conduct a similar teardown of the mirrorless version of PDAF, and most of the material you can find on OSPDAF by googling is not particularly useful unless you spend hours behind a technical journal paywall or in the patent office's archives.
The essential optical difference between PDAF and OSPDAF is the lack of preconditioning discrimination optics in OSPDAF. Crudely speaking, this means that OSPDAF works with a "fuzzier" optical signal than PDAF, which makes it harder to determine precisely the displacement of the image pairs; and because there are no preconditioning optics, the quality of the optical signal varies with the imaging lens aperture. Optical crosstalk is also at play, since all that the masking and microlens shaping does is allow pixels to look "mostly left" and "mostly right". So some of the right-hand image is getting into the left-looking pixel, and vice versa.
OSPDAF's solution to this is - in my understanding - more data. PDAF, because of its preconditioning optics and rigorous image pair isolation, needed remarkably little data to deliver excellent results. Since it was invented in the early days of computing, it HAD to.
OSPDAF grabs the entire frame and selects subareas. The data may be fuzzier, but there's more of it, and given a good statistical denoising algorithm the larger the sample in a given area the more likely you are to find what you're looking for. It's been a long road from the zonal-focus schemes of the early point and shoots to the extremely precise results of modern ILC OSPDAF systems, but OSPDAF systems now usually equal or outperform PDAF...usually.
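To make the "fuzzier signal, more data" point concrete, here is a minimal toy sketch in Python (my own illustration under simplified assumptions, not any manufacturer's algorithm). It builds a 1-D scene, forms left/right phase images whose relative shift stands in for defocus, mixes in crosstalk and noise, and then recovers the shift with a plain cross-correlation over subareas of increasing size. The function name and the crosstalk/noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "scene": white noise smoothed a little so it has some structure.
scene = np.convolve(rng.standard_normal(4096), np.ones(9) / 9, mode="same")

true_shift = 7     # separation (in pixels) between the phase images; stands in for defocus
crosstalk = 0.25   # fraction of the "wrong-side" image leaking into each masked/microlensed pixel
noise = 0.1        # read/photon noise, arbitrary units

# Ideal left/right phase images are shifted copies of the scene.
left_ideal = np.roll(scene, -(true_shift // 2))
right_ideal = np.roll(scene, true_shift - true_shift // 2)

# Crosstalk: each pixel looks "mostly left" or "mostly right", not purely so.
left = (1 - crosstalk) * left_ideal + crosstalk * right_ideal + noise * rng.standard_normal(scene.size)
right = (1 - crosstalk) * right_ideal + crosstalk * left_ideal + noise * rng.standard_normal(scene.size)

def estimate_shift(a, b, max_lag=16):
    """Lag that maximizes the zero-mean correlation of a against shifted b."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.dot(a[max_lag:-max_lag], np.roll(b, -lag)[max_lag:-max_lag]) for lag in lags]
    return lags[int(np.argmax(scores))]

# Estimate over progressively larger subareas: the data are "fuzzy" everywhere,
# but larger samples make the statistical estimate more robust.
for window in (64, 512, 4096):
    lo = (scene.size - window) // 2
    print(window, estimate_shift(left[lo:lo + window], right[lo:lo + window]))
```

Real systems do far more (2-D windows, weighting, confidence metrics, history), but the core "compare two noisy, overlapping half-images statistically" step looks broadly like this.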
To the OP: be careful speculating. PDAF is simple in principle and OSPDAF uses the same principle - I agree with jiberlin and kenw here - but has a lot of subtleties that are difficult to tease out using the practical concepts that we as photographers like to use. Kjersting may state things a bit differently than me, but I agree with him as well.
I'll be the first to admit that I'm also engaging in some speculation, but to me, beginning from the same optical principle, PDAF is predominantly optical with a little bit of data statistics (mostly correlation) and thus limited in its configurability; OSPDAF is primarily statistical with minimal optics and a vast ability to be configured.
 
I think I am OK with these ideas.

If the camera is working with a formed image to focus, I guess it will need to be somehow almost in focus already.

In practical terms I believe the challenge is depth of field: you want it to be sufficient so that you are closer to being in focus, but of course if the image is too dark you may need to open the lens, or fail.

The Panasonic implementation in Micro Four Thirds focuses wide open, but I only have f/2.8 lenses. I have seen with Sony that, with a very fast lens, it can open all the way to f/2 if it is starving for light.

My concern is that there are situations where, wide open, there may be too much light, considering the camera is using a fixed exposure time for live view.

Anyway, in practical terms I have also observed that turning live view on doesn't change anything: the camera still opens the lens to focus.
 
This is an excellent and understandable summary of a lot of papers and patents published over the years. Yes, there are a lot of details we don't know for sure about specific cameras, but the above summary is broadly applicable across almost all OSPDAF systems as far as I am aware (though admittedly there is a lot I'm not aware of).
 
My concern is that there are situations where, wide open, there may be too much light, considering the camera is using a fixed exposure time for live view.
I am unaware of any MILC that has a fixed exposure time for live view. This seems to be another misunderstanding that has led you to an invalid hypothesis.

"Too much light" is not an issue for any live view system. Even 30 seconds of thought about the use of adapted manual lenses would make that obvious to anyone, even if they didn't take the time to understand how live view works.
 
The Olympus OM-1 (and Mk II) gets confused with quad Bayer, but it is a regular Bayer pattern; it just happens that each photosite (there are 80 million sub-photodiodes in total, but the final output is 20MP) is cut into four quadrants, with one microlens covering the group.

This is beyond my technical comprehension, but despite the smaller photosites, it seems Olympus was able to provide an upgrade over their previous design, which was basically how everyone else was doing it but also included pixels masked off in the horizontal direction to get a cross pattern for phase-detection duties.

There is also the A7S III, which uses the quad Bayer design, where each photosite has its own microlens. The design has two different-coloured greens, one darker and one lighter; the lighter one is co-opted for phase-detection duties.

---
I like cameras, they're fun.
 
There's a whole bunch of variations on the quad configuration. You can mask to get directionality and you can shape the microlens to do so as well. Shaping the microlens eliminates the sensitivity loss of masking, and by shaping the lenses to look up/down/left/right you can get X-Y phase sensing. Of course, there's the issue of relative data readout rates between columns and rows of a column-parallel sensor (hence the use of multi-row data readout and stacked sensors), but it's pretty impressive. Sony goes even farther with an "octa-PD" arrangement in its largest cellphone sensors.
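To make the quad idea concrete, here is a schematic sketch (my own toy model, not any vendor's readout pipeline) of how a 2x2 sub-pixel site under a shared microlens can yield both horizontal and vertical phase pairs: sum the sub-pixels column-wise for a left/right pair and row-wise for a top/bottom pair. The function name quad_pd_pairs and the layout assumptions are hypothetical; real sensors interleave colour filters and have vendor-specific readout order.

```python
import numpy as np

def quad_pd_pairs(raw):
    """Split a toy quad-PD mosaic (2x2 sub-pixels per photosite, shape (2H, 2W))
    into its four directional views and combine them into left/right and
    top/bottom phase-image pairs plus the ordinary image value."""
    tl = raw[0::2, 0::2]   # top-left sub-pixels
    tr = raw[0::2, 1::2]   # top-right
    bl = raw[1::2, 0::2]   # bottom-left
    br = raw[1::2, 1::2]   # bottom-right

    left, right = tl + bl, tr + br      # horizontal phase pair
    top, bottom = tl + tr, bl + br      # vertical phase pair
    image = tl + tr + bl + br           # what a single undivided pixel would have seen
    return (left, right), (top, bottom), image

# Example with random data just to show the shapes involved.
raw = np.random.default_rng(1).random((8, 12))
(l, r), (t, b), img = quad_pd_pairs(raw)
print(l.shape, t.shape, img.shape)   # (4, 6) each
```

Each pair can then be fed to the same correlation step sketched earlier, one for horizontal detail and one for vertical.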
 
I think I am OK with these ideas.

If the camera is working with a formed image to focus, I guess it will need to be somehow almost in focus already.
This is the essence of the miracle of the high-f-number (low numerical aperture) optics employed in DSLR PDAF systems: DOF stays very high (f/24 or more), so over most of the focusing range of the system the image pairs remain sharp.
OSPDAF, by contrast, can only work directly with the primary imaging lens aperture (the microlenses over the focusing pixels reduce the effective aperture a bit), so it has to work harder with lower-contrast data. The brighter the light, the better it works. In AF-C you're constantly adjusting focus, and both systems use this focus/contrast history to improve their calculations.
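As a back-of-the-envelope illustration of why such a slow effective cone keeps the image pairs sharp: for a light cone with working f-number N, a focus error of Δ at the sensor plane spreads a point over a blur spot of roughly Δ/N. This is pure geometry, not a model of any actual AF module:

```python
# Geometric blur-spot diameter at the sensor for a focus error of `defocus_mm`
# millimetres in a light cone with working f-number N: roughly defocus / N.
def blur_spot_mm(defocus_mm: float, f_number: float) -> float:
    return defocus_mm / f_number

defocus_mm = 0.5  # half a millimetre of focus error at the sensor plane
for n in (1.8, 2.8, 5.6, 24):
    print(f"f/{n:>4}: blur ≈ {blur_spot_mm(defocus_mm, n) * 1000:.0f} µm")
# At f/24 the spot is about 21 µm; at f/1.8 it is about 278 µm. The slow
# AF-module cone keeps the phase images recognisable over a far larger focus error.
```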
In practical terms I believe the challenge is depth of field: you want it to be sufficient so that you are closer to being in focus, but of course if the image is too dark you may need to open the lens, or fail.

The Panasonic implementation in Micro Four Thirds focuses wide open, but I only have f/2.8 lenses. I have seen with Sony that, with a very fast lens, it can open all the way to f/2 if it is starving for light.

My concern is that there are situations where, wide open, there may be too much light, considering the camera is using a fixed exposure time for live view.
I do not think this is an issue. The camera's exposure system knows where "too bright" lies and doesn't go there. There's also a difference between the gain used for a readably bright image on the EVF and the conversion gain that produces the data.
Anyway, in practical terms I have also observed that turning live view on doesn't change anything: the camera still opens the lens to focus.
The reason for focusing wide open or stopping down is more practical. By focusing wide open you admit the maximum amount of light to the sensor, and OSPDAF systems need a lot of light for precision. In the early days OSPDAF systems would switch over to CDAF, or employ a hybridization of PDAF and CDAF techniques, at surprisingly high scene brightnesses. This has been largely resolved in modern mirrorless cameras.
Sony has a configurable aperture behavior on its ILCs - you can either focus at the designated aperture, or stop at some maximum that guarantees the level of performance Sony wants. Light levels being satisfactory, you want to focus at the designated aperture to avoid shifts that can result from the inevitable imperfections in the lens. You can get general field curvature shifts, and you can get shifts vs light frequency as well. But sometimes you just have to go with getting the most light onto the sensor.
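For what it's worth, the behaviour described above can be summarised as a tiny decision rule. This is purely my own hypothetical sketch of the logic; the EV threshold, the f/2 "wide open" value and the f/11 cap are assumptions for illustration, not anything taken from Sony's or Panasonic's firmware:

```python
# Hypothetical aperture choice during AF: use the taking aperture when there is
# enough light (capped at a stop-down limit), open up when the AF pixels starve.
def af_f_number(set_f: float, scene_ev: float,
                wide_open_f: float = 2.0,         # assumed maximum aperture of the lens
                stop_down_limit_f: float = 11.0,  # assumed "won't stop down past this" cap
                low_light_ev: float = 4.0) -> float:
    if scene_ev < low_light_ev:
        return wide_open_f                    # open up to feed the phase-detect pixels
    return min(set_f, stop_down_limit_f)      # taking aperture, capped at the AF limit

for ev, f in ((12, 8.0), (12, 16.0), (2, 8.0)):
    print(f"EV {ev}, set f/{f} -> focus at f/{af_f_number(f, ev)}")
```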
 
It is impossible for an end-user to conduct a similar teardown of the mirrorless version of PDAF
I don't get what you mean by a teardown here (disassembling the sensor would be useless anyway), but it is definitely possible for amateurs to measure angular response and crosstalk on cameras where the values from the AF pixels can be read out. This includes, AFAIK, Canon cameras with Magic Lantern and the Sigma Quattro H. Canon cameras with the Dual Pixel RAW feature can probably produce photos close to what the camera works with for AF.
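For anyone wanting to try this: Dual Pixel RAW files are commonly described as containing the combined (A+B) frame plus the A half-frame, so the B half can be recovered by subtraction. Here is a minimal sketch under that assumption; split_dual_pixel and the synthetic arrays are my own placeholders, not any library's API:

```python
import numpy as np

def split_dual_pixel(frame_ab: np.ndarray, frame_a: np.ndarray) -> np.ndarray:
    """Recover the B half-image by subtraction, given the combined (A+B) frame
    and the A half-frame. Black-level and white-balance handling are omitted."""
    return frame_ab.astype(np.int32) - frame_a.astype(np.int32)

# Synthetic stand-in for the two frames; a real workflow would load them
# from the raw file with whatever tool exposes them.
rng = np.random.default_rng(2)
a = rng.integers(0, 2000, size=(4, 6))
ab = a + rng.integers(0, 2000, size=(4, 6))
b = split_dual_pixel(ab, a)

# A and B can then be compared patch by patch (e.g. with a correlation over a
# window) to look at their relative displacement, or their ratio can be studied
# against incidence angle to probe angular response and crosstalk.
print(b.min() >= 0)
```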
 
It is impossible for an end-user to conduct a similar teardown of the mirrorless version of PDAF
I don't get what you mean by a teardown here (disassembling the sensor would be useless anyway), but it is definitely possible for amateurs to measure angular response and crosstalk on cameras where the values from the AF pixels can be read out. This includes, AFAIK, Canon cameras with Magic Lantern and the Sigma Quattro H. Canon cameras with the Dual Pixel RAW feature can probably produce photos close to what the camera works with for AF.
I meant in the same sense, and with the same intuitive satisfaction, that Oelund did with her teardown of the D300 AF subsystem. Oelund could nondestructively disassemble and measure the components and then build larger-scale optical bench models that replicated the behavior of the D300 subsystem optics. Of course, much of a DSLR PDAF system's performance comes from those optics, so an optical bench replication is valuable. Most of the work in an OSPDAF system is done in software after image capture through very simple (but carefully designed) microlensing and/or masking, so a physical teardown is not only destructive but also not that useful.

There's also much work being done in software with the DSLR PDAF system, but its optics are doing a lot of the data conditioning work.
 
One doesn't need to tear down an OSPDAF sensor. It is possible to measure the angular response by illuminating the sensor through an aperture or slit that simulates parts of the imaging lens's aperture and recording the sensor's response.
E.g. this article about a sensor gives it in figure 6b.
https://www.semanticscholar.org/pap...fa85ccbf5ee2a997ece81f40ca74992848b9/figure/3
This article (Korean) is about optimizing off-center AF pixels:
https://ki-it.com/xml/34784/34784.pdf
It is possible for a skilled amateur to measure it for the cameras discussed here, as long as the camera permits getting values from the AF pixels.
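A rough outline of the kind of measurement being described, as I understand it (my own sketch of the procedure, not a protocol from either linked paper): sweep a small source across angles in front of the sensor, record the left- and right-looking AF pixel values at each angle, and quantify how much the two response curves overlap. The synthetic curves and the overlap metric below are placeholders for real captured data:

```python
import numpy as np

# Synthetic stand-in for a measurement log: for each source angle (degrees off
# the sensor normal), the mean signal of left- and right-looking AF pixels in a
# small patch. A real run would fill these arrays from captured frames.
angles = np.linspace(-20, 20, 81)
left_response = np.exp(-0.5 * ((angles + 6) / 7) ** 2)   # peaks for light from one side
right_response = np.exp(-0.5 * ((angles - 6) / 7) ** 2)  # peaks for the other side

def overlap(left, right):
    """Simple overlap metric for the two angular-response curves
    (1.0 = identical curves, 0.0 = perfectly separated)."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return float(np.minimum(left, right).sum() / np.maximum(left, right).sum())

print(f"angular-response overlap ≈ {overlap(left_response, right_response):.2f}")
```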

Ah, I get it: you are so dissatisfied with "angular response" that in your reply you mention it zero times.
 
Nope, you completely miss my point. The way that you analyze the electrooptical part of an OSPDAF system is completely different than what Oelund was able to do with a DSLR PDAF module. The larger physical scale and the greater dependence on optical elements play a part in this. Yes, a skilled amateur can measure the angular response of an OSPDAF system at the pixel level simply by reading out the pixel-level converted data, but more of the OSPDAF system's functionality happens in software. By contrast, much more of the DSLR PDAF system's functionality is optically realized and emulatable on an optical bench.
 
The way that you analyze the electrooptical part of an OSPDAF system is completely different than what Oelund was able to do with a DSLR PDAF module.
There is no easy way to get values from the AF sensor of a Nikon DSLR. If there were, one could have provided known inputs, measured the response, and used the same method for measuring both SLR and mirrorless AF.
The larger physical scale and the greater dependence on optical elements play a part in this. Yes, a skilled amateur can measure the angular response of an OSPDAF system at the pixel level simply by reading out the pixel-level converted data, but more of the OSPDAF system's functionality happens in software.
The software method is the depth-from-defocus method, and it didn't manage to gain widespread adoption.
OSPDAF can be complex. An old Sony patent for OSPDAF mentions three series of rows looking at three exit-pupil locations. It's just that dual pixel has them all the same.

What features are people interested in when comparing AF systems? Will it work at f/13? Does it benefit from wide apertures? What SNR does it have? How well does it handle far-defocus situations? To answer these, you need the pixel angular response.

There is no reason why OSPDAF with fewer calculations couldn't be done; it's just that computational power got cheaper faster than OSPDAF fabrication did.

By contrast, much more of the DSLR PDAF system's functionality is optically realized and emulatable on an optical bench.
Which is a weird way of saying "we don't want to measure mirrorless OSPDAF angular response on an optical bench, and even if someone did, we are not interested in it".
 
Enginel, you're again missing my point, which is about the sheer beauty of the optical design of the DSLR PDAF system, and about the elegance of one particular engineer's methods and overall analysis of that system. Invented at a time when digital photography was hardly even a glimmer in engineers' eyes, it contains a remarkable amount of functionality. It had to, because there was no other choice. That you can actually SEE and EMULATE how that functionality arises (at least in the hands of a skilled analyst with an ability to teach, clearly) is beautiful as well.

The DFD software technique is not the method I'm talking about here. That was Panasonic's inadequate cleverness, which depended on lens-defocus behavioral modeling and still looked a lot like CDAF in that it required some iteration.

Without the optical preconditioning of the DSLR PDAF system, OSPDAF must use software techniques to isolate regions of interest within the frame. These are still PDAF techniques, just implemented post-capture.

I'm looking at the engineering of a segment of the PDAF system from an artistic viewpoint, in much the same way one can look at the mechanicals of a steam engine and perceive instantly how they function, how clever the engineers who designed it were, and how beautiful the result is. Today's electric locomotives are clever as well, but the details of their operation are buried in their control computers. Their barely visible mechanicals have been reduced to a basic motor, perhaps a few gears, and a power transfer unit. Elegant and clever in its own right, but less...organic, if you will.

You're arguing your point from the perspective of an engineer assigned to do a formal teardown and analysis of a complete product who sees flaws in the proposed methods of a team member. I'm posting from the perspective of an engineer who is seeing beautiful physical artistry in a particular implementation of a particular function of that product. Modern engineering has given us amazing power in moving the majority of the functionality of our tools into software through digitization. That in itself is beauty, but of a different and less intuitively accessible sort - and I speak as an engineer who is not unfamiliar with coding.

Let's leave this discussion at that.
 
