Now in my early post-Lytro days I've been wondering how I could achieve the same effect with better results, not wanting to wait the years it might take for them to come up with a suitable next-generation model. Lytro's only real selling point at the moment is its ability to take "living pictures" (their parlance), which really just means a photo that is interactive inasmuch as you can focus on different items in the picture by tapping them. The technology may be capable of quite a bit more, but that's all it currently delivers, and it delivers it with poor resolution, graininess, and restrictive requirements on lighting and action.
Living Pictures without Lytro
Why couldn't I achieve the exact same effect with far better results using my existing digital camera? I could, and did! Here's my "living picture" proof, using just an ordinary digital camera and a bit of human assistance.
No Lytro was required for this "living picture", just an ordinary digital camera (in this case a Sony NEX 5). Click on different objects in this scene to change focus depth!
...and now for Lytro's version...
Lytro's "living picture"! Click on different objects in this scene to change focus depth!
It doesn't take an expert in photo analysis to see that the non-Lytro picture looks much better: sharper, higher resolution, less grainy, and more realistic colors.
Faking Lytro Manually
Making Cameras Support Lytro-Like Effects
Many cameras these days have a feature called "exposure bracketing," which reacts to a shutter button press by taking a series of pictures at different exposure settings. You then review the photos later and decide which looks best. Why, then, could you not have a "focus bracketing" feature which does the same thing with focus?

The simplest approach would be to have the camera automatically walk the focus back from infinity to macro, taking as many photos as necessary to achieve a desirable effect; perhaps as few as 5 or 10 would be needed, since with the aperture appropriately set any given picture's depth of field is wide enough for significant ranges to be sharply focused. You would then need some mechanism for assigning clickable regions of the photo to the frames which happened to be in focus in those regions, likely a fairly trivial software problem to solve. All of this could be done with minimal camera intelligence, since it would just be varying the focus distance in a fixed manner.

A far better, but slightly more complicated, solution would be automatic, intelligent focus bracketing using the camera's built-in autofocus system. Many cameras (particularly in phones) let you select a region of the scene which should be in focus. It would thus be easy for the camera to break the scene down into a search grid which it would scan, looking for objects upon which to focus and taking a single picture at each focus depth (one photo per range, according to the depth of field). The camera could record which grid location contained an object at a given focal distance, and this record could later be used to relate clicks on the image to a particular focused frame. The advantage of this approach is that it could be far quicker and more efficient, needing only as many frames as the scene's object depths require.
A scene with two people hugging in the foreground and a church in the background would probably require just two photos to make a "living picture"; people around a table at a birthday party, with a cake in the center and a bounce house in the background, might require 5 or 6 photos to make a Lytro-like image.
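The region-to-frame mapping described above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch, not any camera's actual implementation: frames are assumed to be 2D lists of grayscale values, the grid is fixed, and sharpness is estimated with a crude mean-gradient measure (real software would use a better metric and proper image decoding). The function names are my own invention.

```python
def cell_sharpness(frame, x0, y0, w, h):
    """Mean absolute horizontal gradient inside one grid cell.
    In-focus regions have stronger local contrast, so this score
    is higher where the frame is sharply focused."""
    diffs = [abs(frame[y][x + 1] - frame[y][x])
             for y in range(y0, y0 + h)
             for x in range(x0, x0 + w - 1)]
    return sum(diffs) / len(diffs)

def build_focus_map(frames, grid=4):
    """For each grid cell, record which bracketed frame is sharpest there."""
    height, width = len(frames[0]), len(frames[0][0])
    ch, cw = height // grid, width // grid
    focus_map = {}
    for gy in range(grid):
        for gx in range(grid):
            scores = [cell_sharpness(f, gx * cw, gy * ch, cw, ch)
                      for f in frames]
            focus_map[(gx, gy)] = scores.index(max(scores))
    return focus_map

def frame_for_click(focus_map, x, y, width, height, grid=4):
    """Map a tap at pixel (x, y) to the index of the frame to display."""
    return focus_map[(x * grid // width, y * grid // height)]
```

A viewer app would run `build_focus_map` once after capture, then call `frame_for_click` on every tap to decide which of the bracketed photos to show.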
Working with Motion
These approaches share one significant weakness: using multiple sequentially taken images negates the ability to capture action-oriented scenes. While modern digital cameras can take rapid-fire photos, and those with exposure bracketing take three or so shots in a half second, that's still slow enough to make any significant movement within the scene noticeable when switching between frames. Still, since action is easily blurred with the first-generation Lytro, this hardly seems an argument against this alternate approach.

An interesting solution to this problem, and to the difficulty of altering most existing cameras' firmware, would be a replacement lens that splits a single digital frame into multiple differently focused reproductions of the scene. Just as I use a Loreo 3-D lens to merge the images captured by two lenses onto one digital frame, so too could one produce a system that uses four or perhaps nine small lenses, each focused at a slightly different depth, to capture one instant onto one digital frame. Software could then easily split the single digital image into its component frames and perform a simple focus analysis to determine which regions of each were in focus, with viewer software showing those as appropriate in response to touch.

The limitations of this approach would be the increased lighting requirements (or decreased action) resulting from the smaller lenses, the expected poorer quality of each lens (related to cost, and to this being more a novelty manufacture than something embraced by the lens giants), and the reduced resolution (as your effective megapixel count would be the original value divided by the number of lenses). Many stereo photography setups get around all these issues by coordinating two cameras to take their photos in concert, which you could also do here, though I can imagine nothing more cumbersome.
The Lomo Oktomat, as seen on Lomography, has eight lenses which it uses to capture 2.5 seconds of motion across a single analog film frame divided into 8 regions. The same multi-lens approach, used simultaneously with each lens focused slightly differently, could capture motion with Lytro-like aesthetics.
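The software side of the multi-lens idea, splitting one sensor frame into its per-lens sub-images, is straightforward. Here's a minimal sketch under the same assumptions as before (frames as 2D lists of pixel values); the function name is hypothetical. Note how the split itself makes the resolution cost explicit: each sub-image has the original pixel count divided by the number of lenses.

```python
def split_subframes(frame, rows=2, cols=2):
    """Split one sensor frame into rows*cols sub-images, one per lens.
    A four-lens system would use a 2x2 split; nine lenses, 3x3.
    Sub-images are returned in reading order (left to right, top to bottom)."""
    h, w = len(frame), len(frame[0])
    sh, sw = h // rows, w // cols
    return [[row[c * sw:(c + 1) * sw] for row in frame[r * sh:(r + 1) * sh]]
            for r in range(rows) for c in range(cols)]
```

Each sub-image could then be fed to the same sharpness analysis used for sequential bracketing, with the advantage that all of them were exposed at the same instant.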
Focus Bracketing leads to Focus Stacking (Hyperfocus)
As I began to look into the practicality of these approaches I was pleased to discover that "focus bracketing" was already being done manually, though with an intriguingly different goal. Rather than producing a living picture where you can focus on different elements of a scene, a process called focus stacking is used to take photos at different focus settings and then merge them, using software, into a single image in which everything in the scene is in focus. The software analyses each photograph in the stack (each of the identical scene, with only the focus varied) and uses the in-focus regions of each photo to produce the combined image. This approach produces very impressive results. The only limitations are the requirement for a still scene and the strong recommendation (if not requirement) that you use a tripod when taking your shots, so that as little as possible varies between them.
Series of images demonstrating a 6 image focus bracket of a Tachinid fly. First two images illustrate typical DOF of a single image at f/10 while the third image is the composite of 6 images. From Focus Stacking entry on Wikipedia.
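The core of the stacking merge can be sketched simply: for every pixel, take the value from whichever frame looks sharpest at that spot. This is a toy sketch under the same 2D-list assumptions as before; real stacking software aligns the frames first and smooths the per-pixel decision map to avoid seams, neither of which is attempted here.

```python
def local_sharpness(frame, x, y):
    """Absolute gradient at (x, y): a rough measure of local focus."""
    h, w = len(frame), len(frame[0])
    gx = abs(frame[y][min(x + 1, w - 1)] - frame[y][x])
    gy = abs(frame[min(y + 1, h - 1)][x] - frame[y][x])
    return gx + gy

def focus_stack(frames):
    """Merge frames by taking each pixel from the frame sharpest there,
    producing a single image with everything (roughly) in focus."""
    h, w = len(frames[0]), len(frames[0][0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            best = max(frames, key=lambda f: local_sharpness(f, x, y))
            row.append(best[y][x])
        out.append(row)
    return out
```

The same per-region sharpness data, kept separate instead of merged, is exactly what the "living picture" viewer needs, which is why the two techniques are technically such close cousins.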
The aesthetic of a photo in which most things are in focus is quite different from one in which only the things you select are in focus, but from a technical standpoint they are quite similar, since both require possessing focused image data for every element in the scene. And a viewer could (and likely would) be given the option of viewing such a photo however they wished. Do they want to see the photo traditionally (one fixed, non-interactive focus point), as a "living picture" where they choose the object in focus, or as a photo in which everything is in focus?
Focus Bracketing Available Today on your Canon PowerShot
A little further research led me to a rather intriguing way to add automatic focus bracketing to an entire range of camera models, via the Canon Hack Development Kit (CHDK). CHDK lets you safely and temporarily run a highly configurable and extensible alternative firmware on your Canon PowerShot. Users have used it to add focus bracketing for the purposes of focus stacking, and have included detailed instructions on just how you can do it, too.
Coming Soon as an iPhone & Android Camera App
This integration of camera and software is a natural fit for an iPhone or Android app, where the app can control the capturing of the image and the intelligent variation of focus, then do the simple post-processing to make the image click-focusable. While I haven't seen such an app, I'm sure one will come soon. I'd write it myself if I had the time.
Until the Future Comes
The point is that until Lytro demonstrates just what can be done with a light field camera, beyond merely creating a low-resolution "living picture", there's really no technical justification for placing the technology in people's hands when the same problem can be solved as effectively with traditional digital cameras. If demand exists (and perhaps it will come) for this image experience, no light fields need apply. Hopefully traditional digital camera companies will see the aesthetic value and add intelligent focus bracketing to their firmware along with coordinated desktop software, app developers will launch good living-picture camera apps, and Lytro will demonstrate the additional merits of capturing and reproducing images from light fields.
This past Sunday after the Griffith Park Sidecar Rally I went with Francine to the nearby Travel Town, a train museum. Here are some 3-D photos I took. You'll need your red/cyan or parallel glasses.
I went to the Hollywood Forever Cemetery's Dia De Los Muertos (Day of the Dead) festival this year and brought along my 3-D lenses. I got about 150 photos in all, of many of the shrines and people. I'm still figuring out the lenses, so about 30% of the photos didn't turn out well, presumably because I gauged the distance to the subject incorrectly and the parallax error is just too great to survive looking at. Some of the photos are pretty stunning in 3-D; it's almost like being transported back to the event. See what you think.