I recently bought a 1971 AMC 5 ton Army Surplus M820 Expansible Van. My goal is to transform it into a mobile office in the style of a Victorian gentleman's study.
See some photos of the truck and some first thoughts about how I might redo the interior of the expansible box below.
A few weeks back I stumbled across a forum thread on Holocaust Denial. I'd first read about the topic about 15 years ago when Usenet was the Internet's popular discussion forum. The years hadn't diminished my fascination with the notion that a militant minority fervently denies events the majority accepts as wholly factual. How could there be disagreement about such seemingly self-evident world events (with millions of people involved as witnesses, victims, perpetrators, etc.)? I'll write more on the topic at some point, perhaps, since I enjoy tracing everyone's ulterior motives and seeing how they influence what should be rational discussion. But for now I'll just mention the horror that greeted me when I logged back on to YouTube after having watched a series of videos on this topic. YouTube had apparently decided that I was a neo-Nazi and wanted to helpfully recommend like-minded channels I should subscribe to. Yikes.
I am pleased, I suppose, that YouTube doesn't play favorites with ideas and allows minority opinions and majority opinions to be heard and subscribed to, but I do wish to God there were a way I could firmly explain to YouTube that interest in a topic does not mean subscription to the idea at the heart of that topic. As there is none, I'll just have to announce, for the benefit of any government, conspiratorial, Zionist, etc. agency listening, that there has been a terrible misunderstanding, and I am not a Nazi.
In a moment of anything but wisdom, Microsoft has decided to leave earlier versions of the .NET Framework out of the Windows 8 install, including only versions 4.0 and 4.5. The reason they give for this peculiar decision is their desire for a smaller OS install footprint. While losing less disk space to an OS install is a noble goal, I can think of few things worse to leave out. Any Windows 8 user who subsequently downloads and wants to use an application written against the 3.5 or earlier .NET runtimes will be forced to install (over the 'net) a multi-hundred-megabyte, reboot-required installer (supporting .NET 3.5, 3.0, and 2.0). Few things deter a potential user of your software more than a lengthy download and a forced reboot.
Adding insult to injury, I am quite sure their smaller OS footprint goal is little more than an attempt to defend against one of Apple's (and others') easy anti-Windows attacks. Unless Microsoft has radically altered the way they handle Windows Updates, their Driver Store, WinSXS, temporary files, etc., then whatever savings they claim at initial install will be gone in a few months; the Windows directory of my 1.5-year-old computer is a whopping 37 GB.
Why couldn't Microsoft leave out MS Paint, MS Write, Solitaire, audio recorder, Pinball, or hell, even Internet Explorer, and include the full range of .NET support? Now we poor developers will once again need to distribute versions of our software targeting multiple runtimes just to ensure most users don't have to do the absurd .NET installs.
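One partial mitigation worth mentioning (with the caveat that not every app behaves identically on the newer runtime, so test first): an application compiled against the 2.0/3.5 runtimes can declare in its app.config that it's willing to run on the 4.0 CLR, sparing users that giant download in many cases. A sketch of the relevant config:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup>
    <!-- Listed in order of preference: try the CLR already on Windows 8
         first, then fall back to the 2.0 runtime if it's installed. -->
    <supportedRuntime version="v4.0" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>
```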
I've been a huge fan and user of AutoHotkey (AHK) for years, but I've got to admit (with a sense of betrayal) that I'm increasingly impressed with AutoIt. Last week I had an automation project to do and began coding it in AHK, only to run into several major roadblocks. For the automation I needed to traverse a third-party application's tree view UI to find a specific entry and click it. Later in the automation I had to do something similar with a list view control. I had expected to find easy mechanisms or code samples for this in AHK. To my surprise I found relatively little; the built-in functions relate to creating those GUI elements, not to manipulating already existing ones. And the little sample code/DLLs I did find didn't seem recently updated and didn't work (with AutoHotkey_L). I accidentally stumbled across AutoIt threads on the topic and was pleased to discover that all this is quite easy with AutoIt, which officially supports those features in its standard include libraries. And thus began my journey into AutoIt.
Here are my impressions:
- The language syntax of AutoIt is more consistent than AHK's, and mostly for that reason I liked it more. When I first started with AHK I found it really confusing that AHK supported multiple distinct paradigms: foo = bar alongside foo := "bar", and Foo(Bar) alongside Foo, Bar (not to mention Foo Bar, the first comma being optional!?). I still find myself making quite a few typos/errors in these situations: forgetting which is a normal function and which is the other style, or writing := when I meant =. I'm sure the explanation for all this is historical, but the lingering embrace of all the styles simultaneously is odd (why can't Foo, Bar be called as Foo(Bar) so that people can write to the new paradigm?!). Oh, and the hotkey hooking/specification syntax is mixed right in with regular code, which also confused me.
- The packaging of the setup/install of AutoIt is impressive, including the SciTE editor, example code, the extended library of functions, x86 and x64 compilers, obfuscator, build tool, auto updater, and more. I haven't installed AHK recently, so maybe AHK does just as complete an install. In my testing/development I had to set this up on 4 computers, and I couldn't have asked for an easier time of it.
- AutoIt has embeddable compiler and obfuscator directives! You can embed commands in the source that will trigger obfuscation, generation of both x86 and x64 binaries in one compilation run, you can include resources, set the EXE manifest-related data including administrator elevation, PE details, etc. Very nice!
- AutoIt Help files are almost useless when compared to their AHK counterparts. The index list and the keyword search functions seemed to miss a great deal that should be in their documentation, and it seems as though they do not include many (if not most) of their official support library functions in the help documentation. If you do find the page you need in their docs then everything is okay, they have good examples and references, but I'd swear 60-70% of the time I couldn't find what I needed and had to jump over to their forums or search with Google.
- The AHK community is absolutely amazing, and it would be hard to top them in terms of friendliness, helpfulness, knowledge, code-sharing, etc. I have only been an observer on the AutoIt boards as I looked for other people's solutions, and so perhaps my observation is meaningless, but I saw more grumpy unfriendliness towards newbies than I'd remembered seeing on the AHK boards. (I'm not saying the AutoIt community isn't great, too, it probably is, it just might be a little less tolerant of newbies and their poorly researched questions.)
- AutoHotkey automatically handles most UI interaction logic for you (via g-prefixed label names supplied in the various GUI element creation commands), whereas AutoIt requires you to create your own window message processing loop, with a switch/select on the message, to handle every interaction to which you want to respond.
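To make that concrete, here's a minimal sketch of the AutoIt side (a toy example of my own, not from either project's docs): the GUIGetMsg loop you have to pump yourself, where AHK would have dispatched the button press to a gLabel for you.

```autoit
#include <GUIConstantsEx.au3>

Local $hGui = GUICreate("Demo", 220, 100)
Local $idBtn = GUICtrlCreateButton("Click me", 70, 40, 80, 25)
GUISetState(@SW_SHOW)

; In AutoIt you must poll and dispatch every event yourself
While 1
    Switch GUIGetMsg()
        Case $GUI_EVENT_CLOSE
            ExitLoop
        Case $idBtn
            MsgBox(0, "Demo", "Button clicked")
    EndSwitch
WEnd
```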
- As mentioned earlier there's a distribution-included obfuscator, which seems pretty good. The quasi-lack of one with AHK has been an annoyance of mine; AHK_L doesn't do the password thing any more, and I never had much luck with Hotkey-Camo or anything else.
- I was impressed with how quickly I was able to jump right into AutoIt using my AHK knowledge. I imagine it'd be harder coming the other way, because of the unusual multi-paradigm AHK language thing. Both languages are remarkably similar in their use, with many functions being identical in name and use. Example: Send, Foo in AHK is Send("Foo") in AutoIt. Within a few hours I was able to automate a relatively complicated and branched Windows dialog flow (related to driver installation, involving tree view navigation, list view navigation, support for different scenarios on different versions of Windows, etc.).
In no way am I concluding that AutoIt is better than AutoHotkey, nor can I conclude the opposite. My love of AutoHotkey isn't wavering, but I am glad AutoIt was there for a task which seemed like it would have been harder for me to do in AHK with the existing public code. So if you ever find yourself in a similar situation you needn't feel shy about trying out AutoIt.
This morning my HP laptop strongly suggested I upgrade its included support software (the HP Support Assistant). I foolishly accepted its offer and spent the next four hours ruing that decision and trying to correct the damage it did. The upgrade somehow screwed itself up (rendering the HP Support Assistant broken) and along the way also screwed up my VBScript support. I found numerous links that talked about related problems but none that fixed mine. Most solutions revolved around removing the registry keys for the VB scripting DLL and then re-registering the DLL. For some reason re-registering the DLLs didn't seem to work for me, despite running the command shell elevated.
Ultimately I exported the registry entries from another working Windows 7 x64 computer and merged them on the ailing laptop. I then uninstalled the HP Software Framework and the HP Software Assistant and then reinstalled them in that same order. And voila, at long last everything worked.
For anyone who needs them, here are the registry keys in question for fixing your VBScript install on Windows 7 x64: HP Support Assistant VB Scripting Registry Fix.
Included are registry keys (and DLLs) for both the 32-bit and the 64-bit support. All you need to do is merge (by opening) the five .reg files in the two directories. I include the DLLs just for reference, in case your installation has a damaged DLL.
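For completeness, here is the standard re-registration that most guides suggest, run from an elevated command prompt (it didn't fix my machine, but it's worth trying before resorting to the registry merge; note that on x64 Windows the 32-bit DLL lives in SysWOW64):

```bat
regsvr32 %systemroot%\system32\vbscript.dll
regsvr32 %systemroot%\SysWOW64\vbscript.dll
```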
Flying multi-rotor (particularly quadcopter) radio-controlled vehicles is a lot of fun and you can do amazing things with them, in particular some beautiful aerial photography. While most r/c pilots do this by looking up at their craft from whatever their distance happens to be at the moment, a growing number of r/c enthusiasts are using FPV (first person view) to remotely control their vehicles. By using a tiny video camera with an attached transmitter, the pilot can virtually fly their quadcopter as though they were a miniature pilot seated inside it. Aside from just being a lot of fun, this perspective makes it possible to fly over far longer distances than one could by merely watching the craft from the ground. The equipment to do FPV is not cheap, however, with decent entry-level setups of camera, transmitter, and receiver costing $1,000. And so I couldn't help but wonder why no one talks about using the ubiquitous smartphone, with its built-in cameras, as an alternative solution. The cell phone has a number of advantages over a typical FPV setup: with the right apps running it can record its own video, it can operate over almost unlimited distance by using cellular networks for data transmission, it can operate with very high bandwidth over 4G or wifi (though very limited distance with wifi), it can log its own flight path by recording its GPS positions, and if it crashes it can transmit its location to make recovery easy. With all these advantages in a package that can cost just $100, it's hard to imagine it not being used by everyone!
From what I understand in talking to a few people, the issue comes down to video quality, latency, and the possibility that the connection simply drops out. Few people want to risk their $1,500-and-up r/c darling on Sprint's or AT&T's potentially spotty and variably efficient cell coverage. And the current video streaming apps available for the key FPV feature are not intended for mission-critical, near-real-time transmission. Imagine trying to drive a car or pilot a plane with Skype. You might do just fine for a while, as long as sender and receiver have good signals, but if either gets into trouble the video suddenly becomes erratic and delayed, pictured objects become indistinct, and it would be impossible to make critical operating decisions based on that.
Given the enormous complexities being overcome daily by dedicated enthusiasts of r/c flying, this seems like a challenge that can fairly easily be met. One of the key pieces of software this community has developed and continually refined is the "flight controller", the software which takes signals from the pilot's transmitter (the device with the joysticks that he uses to control the vehicle) and turns them into adjustments to motors and control surfaces. Many flight controllers now even come with amazingly sophisticated autopilot features, like the ability to hover motionless in one spot (despite winds, etc.), to fly home in the event that signal is lost or a fault develops, to land automatically if batteries are low, and even to navigate on their own, flying between previously supplied coordinates. If all that can be achieved, surely the FPV-via-cell-phone problem can be overcome!
There are three problems that need to be addressed: loss of signal, degradation of signal, and overall quality of video.
Loss of video signal is clearly a very real problem, and no amount of clever software can make up for a lack of inputs from the pilot, but such situations can be handled so as to minimize their negative impact. As I mentioned above, many flight controllers now include safety modes such as hover, automatic return home, and automatic landing. A pilot with such an onboard autopilot can, at the flick of a switch, tell his craft to do the appropriate thing, presumably either hovering to wait and see if signal is restored or beginning to return home, at least far enough to regain the signal. Also, the FPV app on the cell phone can be optimized to recover quickly from any network failure (once the underlying data connection is restored). And if this were all to develop and become more sophisticated, an integration of the FPV app and the onboard flight controller would clearly be ideal, allowing the FPV app to command autopilot features directly in the event of network loss, as well as to transmit flight controller telemetry to the ground during routine flight (and perhaps control other features of the flight controller as well).
Degradation of signal and overall quality of video are related problems. The key here is, I believe, to develop a video codec, or perhaps just an application of existing codecs, that focuses on the critical visual data FPV pilots need. While users of video streaming applications like Skype want overall picture quality to be good, an FPV pilot is primarily focused on visual information related to the orientation of their craft relative to the ground, potential physical obstacles in their path, and anything necessary to continue whatever flight motion they were executing. If the signal degrades and there is less bandwidth over which to send video, it's most important that the critical parts of the pilot's picture continue to appear in near real time! How exactly one extracts or prioritizes those features I am not sure at this point, but I am fairly certain it's an achievable goal. One need only think of situations where one's own vision has degraded due to environmental factors like darkness, fog, smoke, etc. to realize that our brains can seize upon very small and sometimes indistinct cues to maintain orientation. An algorithm could be developed to prioritize the sending of lower-quality video data related to the horizon and to nearby obstacles (elements of the frame which the algorithm has noted move more relative to the overall background). And with the ongoing development in the area of computers interpreting images to extract features like faces, eye positions, smiles, body positions, etc., it should present little challenge for the FPV app to maintain an awareness of very crude items like the horizon and those objects most likely to represent near obstacles. This specific data could even be transmitted over ultra-low bandwidth as mere vector data rather than actual color images, in other words allowing the receiving cell phone app to reconstruct the approximate figures overlapping whatever video may or may not be coming in.
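To give a feel for just how little bandwidth "mere vector data" would need, here's a toy Python sketch (the function and field names are my own invention, not any real FPV protocol): a horizon estimate plus a few obstacle outlines pack into a few dozen bytes, versus megabytes for an uncompressed video frame.

```python
import struct

def encode_critical_cues(horizon, obstacles):
    """Pack only orientation-critical cues as tiny binary vector data.

    horizon: (roll_degrees, pitch_offset) as floats
    obstacles: list of (x, y, radius) in normalized screen coordinates
    """
    payload = struct.pack("<2f", *horizon)        # 8 bytes for the horizon
    payload += struct.pack("<B", len(obstacles))  # 1 byte obstacle count
    for ob in obstacles:
        payload += struct.pack("<3f", *ob)        # 12 bytes per obstacle
    return payload

# Two hypothetical obstacles plus a slightly rolled horizon: 33 bytes total,
# versus roughly 2.7 MB for a raw 24-bit 1280x720 frame.
cues = encode_critical_cues((3.5, -0.1), [(0.4, 0.6, 0.05), (0.8, 0.2, 0.1)])
```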
Hopefully these things will see development in the near future, because it can hardly be denied that the potential here is huge.
I recently bought the Blade mQX micro quadcopter and it is absolutely amazing! If you think it looks fun you'd be a fool not to buy one and see just how much. For $129 (Bind-n-Fly) or $169 (Ready to Fly) you just can't go wrong.
The mQX is super responsive, acrobatic, and (near as I can tell) indestructible.
This is my very first quad and very first "real" r/c anything; I've only owned two or three of those cheap, indestructible indoor foam helicopters. Stepping up to the mQX was definitely not easy, but neither was it too great a step to make. If I can do it, you can do it too! But I would ***strongly*** recommend buying the Phoenix R/C simulator and flying the Gaui 330X model (that's the only simulator I've seen with a quad model in it). I spent hours flying the Gaui in the balloon popping trainer of the Phoenix simulator and it made a profound difference in my flying ability. I had serious doubts that a simulator could help with real world flying, especially given that I was flying a non-mQX model, but was happily proven completely wrong. My real world flying made a quantum leap after a few nights of simulator flying.
One of the most impressive things about the mQX is how inconceivably strong it is. In my early flights especially I would get disoriented and send the mQX screaming into the ground, and it sustained essentially no lasting damage (the canopy was destroyed and I had to bend a propeller blade back straight a couple of times, but that's it).
And, coolest of all, the mQX is powerful enough to mount any of the tiny 808 cameras. I put a jumbo keychain 1080p camera on it and it seemed to fly as responsively as it always did, and for seemingly almost as long as it always did. It amazes me that I could have an HD aerial camera platform for under $200. (Obviously it's no $1,000 stabilized system, but on a very calm day with a good pilot consciously flying for a particular shot, it's stunning!)
I don't usually bother to evangelize a product, but this one deserves it!
If you're interested, check out my beginner's guide to flying r/c helicopters, planes, or quadcopters.
Now in my early post-Lytro days I've been wondering how I could achieve the same effect with better results, not wanting to wait the years it might take for them to come up with a suitable next generation model. Lytro's only real selling point at this moment is its ability to take "living pictures" (their parlance), which really just means a photo which is interactive in as much as you can focus on different items in the picture by tapping those items. The technology may be capable of quite a bit more, but that's all it currently delivers, and it delivers that with poor resolution, graininess, and restrictive requirements on lighting/action.
Living Pictures without Lytro
Why couldn't I achieve the exact same effect with far better results using my existing digital camera? I could, and did! Here's my "living picture" proof, using just an ordinary digital camera and a bit of human assistance.
No Lytro was required for this "living picture", just an ordinary digital camera (in this case a Sony NEX 5). Click on different objects in this scene to change focus depth!
...and now for Lytro's version...
Lytro's "living picture"! Click on different objects in this scene to change focus depth!
It doesn't take an expert in photo analysis to see that the non-Lytro picture looks much better: sharper, higher resolution, less grainy, and more realistic colors.
Faking Lytro Manually
Making Cameras Support Lytro-Like Effects
Many cameras these days have a feature called "exposure bracketing", which reacts to a shutter button press by taking a series of pictures at different exposure settings; you then review the photos later and pick the one that looks best. Why then could you not have a "focus bracketing" feature which does the same thing with focus? The simplest approach would be for the camera to automatically walk the focus back from infinity to macro, taking as many photos as necessary to achieve a desirable effect; perhaps as few as 5 or 10 would be needed, since with the aperture appropriately set any given picture's depth of field is wide enough to allow significant ranges to be sharply focused. You would then need some mechanism for assigning clickable regions to the photo frames which happened to be in focus in that region, likely a fairly trivial software problem to solve. All of this could be done with minimal camera intelligence, since it would just be varying focus distance in a fixed manner.

A far better but slightly more complicated solution would be to do automatic, intelligent focus bracketing using the camera's built-in autofocus system. Many cameras (particularly in phones) allow you to select a region of the scene which should be in focus. It would thus be easy for the camera to break the scene down into a search grid it scans looking for objects upon which to focus, taking a single picture at each focus depth (one photo per range, according to the depth of field). The camera could record which grid location contained an object at each focal distance, usable later to relate clicks on the image to a particular focused frame. The advantage of this approach is that it might be far quicker and more efficient, needing only as many frames as the scene objects' depths require.
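The click-to-frame mapping step really is simple software: given a per-cell sharpness score for each bracketed frame (local contrast would do), you just record which frame wins each grid cell. A minimal Python sketch of my own (not any camera's actual firmware):

```python
def build_click_map(sharpness_grids):
    """sharpness_grids: one grid per bracketed frame, each holding a
    per-cell sharpness score (e.g. local contrast). Returns, per cell,
    the index of the frame that is sharpest there, so a tap on a region
    of the final "living picture" knows which frame to display."""
    rows, cols = len(sharpness_grids[0]), len(sharpness_grids[0][0])
    click_map = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            click_map[r][c] = max(range(len(sharpness_grids)),
                                  key=lambda i: sharpness_grids[i][r][c])
    return click_map

# Frame 0 focused near (sharp in the top-left), frame 1 focused far
# (sharp in the bottom-right), over a toy 2x2 grid of sharpness scores.
grids = [[[9.0, 2.0], [2.0, 1.0]],
         [[1.0, 3.0], [3.0, 8.0]]]
click_map = build_click_map(grids)
# tapping the top-left cell selects frame 0; the bottom-right, frame 1
```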
A scene with two people hugging in the foreground and a church in the background would probably require just two photos to make a "living picture"; people around a table at a birthday party with a cake in the center and a bounce house in the background might require 5 or 6 photos to make a Lytro-like image.
Working with Motion
These approaches share one significant weakness: using multiple sequentially taken images negates the ability to capture any action-oriented scenes. While modern digital cameras take rapid-fire photos, and those with exposure bracketing take three or so shots in a half second, that's certainly slow enough to make any significant movement within the scene noticeable when switching between frames. Still, as action easily blurs with the first-generation Lytro, this hardly seems like an argument against the alternate approach. An interesting solution to this problem, and to that of the inability to easily alter most existing cameras' firmware, would be to use a replacement lens that splits a single digital frame into multiple differently focused reproductions of the scene. Just as I use a Loreo 3-D lens to merge the images captured by two lenses onto one digital frame, so too could one produce a system that uses four or perhaps nine small lenses, each focused at a slightly different depth, to capture one instant onto one digital frame. Software could then easily split the single digital image apart into its component frames and do an easy focus analysis to determine which regions in each were in focus, with viewer software showing those as appropriate in response to touch. The limitations of this approach would be the increased lighting requirements (or decreased action) resulting from the smaller lenses, the expected poorer quality of each lens (related to cost and this being more a novelty manufacture than something embraced by the lens giants), and the reduced resolution (your effective megapixel count being the original value divided by the number of lenses within the lens). Many stereo photography setups coordinate two cameras to take their photos in concert, getting around all these issues; you could do the same to solve this problem, though I can imagine nothing more cumbersome.
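The "split the single digital image apart" step is mechanical: carve the sensor frame into a grid of sub-images, one per lens. A toy Python sketch of my own (using nested lists in place of real image data):

```python
def split_frame(image, tiles):
    """Split a single captured frame (a 2-D grid of pixel values) into
    tiles x tiles sub-images, one per lens in the multi-lens assembly,
    reading tiles left-to-right, top-to-bottom."""
    h, w = len(image), len(image[0])
    th, tw = h // tiles, w // tiles
    subs = []
    for ty in range(tiles):
        for tx in range(tiles):
            subs.append([row[tx * tw:(tx + 1) * tw]
                         for row in image[ty * th:(ty + 1) * th]])
    return subs

# A 4x4 toy "frame" with pixel values 0..15, split for a 2x2 lens array.
frame = [[y * 4 + x for x in range(4)] for y in range(4)]
quads = split_frame(frame, 2)
# quads[0] is the top-left 2x2 sub-image: [[0, 1], [4, 5]]
```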
The Lomo Oktomat as seen on Lomography has eight lenses which it uses to capture 2.5 seconds of motion across a single analog film frame divided into 8 regions. The same multi-lens approach, but used simultaneously with each lens focused slightly differently, could capture motion with Lytro-like aesthetics.
Focus Bracketing leads to Focus Stacking (Hyperfocus)
As I began to look into the practicality of these approaches I was pleased to discover that "focus bracketing" was already being done manually, though with an intriguingly different goal. Rather than producing a living picture where you can focus on different elements in a scene, a process called focus stacking is used to take, and then merge using software, photos taken at different focus settings to produce a single image in which everything in the scene is in focus. The software involved analyses each photograph in the stack, each of the identical scene with only the focus varied, and uses the regions of each photo which are in focus to produce the combined image in which everything is in focus. This approach produces very impressive results. The only limitations of this system are the requirement for a still scene and the strong recommendation (if not requirement) that you use a tripod when taking your shots, so that as little as possible varies between frames.
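The core of the merge is easy to sketch: for every pixel, keep the value from whichever frame is sharpest at that spot, using local contrast as a crude sharpness proxy. A minimal Python illustration of the idea (toy grayscale grids, not a real stacking package):

```python
def local_contrast(img, r, c):
    """Crude per-pixel sharpness proxy: summed absolute differences to
    the right and lower neighbours (in-focus regions have more contrast)."""
    h, w = len(img), len(img[0])
    total = 0
    if c + 1 < w:
        total += abs(img[r][c] - img[r][c + 1])
    if r + 1 < h:
        total += abs(img[r][c] - img[r + 1][c])
    return total

def focus_stack(frames):
    """Merge equally sized grayscale frames by picking, per pixel, the
    value from the frame that is sharpest at that spot."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[max(frames, key=lambda f: local_contrast(f, r, c))[r][c]
             for c in range(w)] for r in range(h)]

# Toy stack: 'sharp' has strong local contrast everywhere, 'blurry' is
# nearly flat, so the merged result should reproduce the sharp frame.
sharp = [[10, 0], [0, 10]]
blurry = [[4, 4], [4, 4]]
merged = focus_stack([sharp, blurry])
```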
Series of images demonstrating a 6 image focus bracket of a Tachinid fly. First two images illustrate typical DOF of a single image at f/10 while the third image is the composite of 6 images. From Focus Stacking entry on Wikipedia.
The aesthetic of a photo in which most things are in focus is quite different from one in which only those things you select are in focus, but from a technical standpoint they are quite similar, since both require possessing in-focus data for every element in the scene. And a viewer could (and likely would) be given the option of viewing such a photo as he/she wished. Do they want to see the photo traditionally (one, non-interactive focus point), as a "living picture" where they can choose the object in focus, or as a photo in which everything is in focus?
Focus Bracketing Available Today on your Canon PowerShot
A little further research led me to find a rather intriguing ability to add automatic focus bracketing to an entire range of camera models, via the Canon Hack Development Kit (CHDK). CHDK allows you to safely, temporarily use a highly configurable and extensible alternative firmware in your Canon PowerShot. And users have used this to add focus bracketing for the purposes of focus stacking, and included detailed instructions on just how you can do it, too.
Coming Soon as an iPhone & Android Camera App
This integration of camera and software is a natural fit for an iPhone and Android app, where the app can control the capturing of the image and intelligent variation of focus and then do the simple post-processing to make the image click-focus-able. While I haven't seen such an app, I'm sure it'll come soon. I'd write it myself if I had the time.
Until the Future Comes
The point is that until Lytro demonstrates just what can be done with a light field camera, beyond merely creating a low-resolution "living picture", there's really no technical justification for placing the technology in people's hands when the same problem could be solved as effectively with traditional digital cameras. If demand existed (and perhaps it will come) for this image experience, no light fields need apply. Hopefully traditional digital camera companies will see the aesthetic value and include support in their firmware (for intelligent focus bracketing) and co-ordinated desktop software, app developers will launch good living picture capturing app cameras, and hopefully Lytro will demonstrate the additional merits of capturing and reproducing images from light fields.
There are few musicians I react to quite like I do Andrew Bird. The notes coming out of his instruments I enjoy quite a lot, but his lyrics I find distractingly infuriating. His lyrics remind me of the comments a teacher's pet might make after being called upon in a high school physics class. His goal is to convince everyone, and probably first and foremost himself, that he's smart. Andrew Bird's lyrics drip with this unnatural self-congratulatory alleged cleverness, weaving supposedly big words with arcane references. He reminds me a lot of the columnist George Will, who seems to feel compelled to include in every column at least 5 - 10 words no ordinary citizen of Earth has heard within their lifetime. We get it, you guys want us to think you are very smart! Congratulations, someone give them a f-cking prize. Now, please get on with the business of being understandable and understood.
For some reason Andrew Bird reminds me of San Francisco. I've only visited a few times, and while I loved it, I couldn't help but feel that the city as a whole is just a bunch of hipster people trying to out-cool each other while everyone else is left to do the real work of running the nation while they smile and take all the credit (see Apple and iEverything, Google and gEverything).
Oh, and for a similar-ish musician who I think does it just right, being a musical super genius without beating you over the head with it, see Beirut.
I pre-ordered the Lytro back in October, excited by the stunning live demos on their site. A camera that captures a focus-less light field and allows you to do the focal interpretation later was just too amazing not to buy. The potential and real advantages were immediately obvious: stunning "live" photos, potential for effective single lens 3-D capture, potential for faster picture taking with no need to wait for auto focus, no out of focus pictures, ability to take better pictures in worse light conditions, ability to capture subtler image detail across the objects photographed.
My Lytro was one of the first to be shipped and it arrived just three days ago. Sadly, due to Lytro's absolute requirement that you have a Mac running version 10.6.6 or higher to view your photos, it wasn't until today that I could actually try it out. My initial excitement, which had become frustration, rebounded at the chance to see just what this camera could do! Sadly, upon viewing the photos I'd taken, it beat a hasty retreat.
The Lytro is cool, but I cannot imagine myself actually using this thing in my daily life. It can take amazing pictures, as proven by the stunning live demos on the Lytro site. But I now appreciate just how many pictures must have been sifted through to pick out those hypnotically good ones. If you've got the time and the artistic inclination I have no doubt you can and will do amazing things, but the vast majority of my shots look awful.
Here's what I discovered:
- The camera's effective resolution is low! Your pictures are 1024 x 1024.
- The lens requires a lot of light*! Unless conditions are right your images will be extremely grainy.
- Everything must be still*! Motion, both your own and other people's, must be minimized otherwise your photo will likely be blurry.
- Mac and only Mac! Unless you normally use a Mac daily (which I don't) you're just going to be annoyed by the absolute necessity of the Mac software. You cannot view images or export images without using the Mac software.
* Obviously capturing a scene sharply involves a trade-off between light and motion; less light is fine if all is still, and more motion is fine if there is enough light. I'm just saying that in "ordinary" life situations where people move, where light can be low, and where your hand isn't stabilized, this camera can be trouble.
My experience of my Lytro has, therefore, been pretty disappointing. I imagined myself taking this camera with me everywhere, eager to capture "living pictures" to use their lingo, freezing moments in a manipulatable form. But now I imagine carrying this thing around would only breed frustration as I could never rely on images I took coming out right. Some would stun but all too many moments would be unenjoyably grainy and blurry. As it stands I'm better off with the clearer dimensional realities captured by my ever-present Evo 3-D and where appropriate my Sony NEX 5.
And so my Lytro is now up for sale on eBay. If it were more amazing or much cheaper I'd keep it for those special moments where I could afford to experiment, but at $499 I don't want to be a guinea pig.