I love C#. It takes everything I loved about my years programming in early Java and adds boatloads of wonderful. But there is one thing that perpetually infuriates me. All C# source code refers to classes, types, and other symbols relative to the list of "using" statements at the top of the file and a list of library references managed separately in the Visual Studio UI. The problem is that the wonderfully helpful source code people post on the web never includes these UI-managed library references, which means that any time you copy and paste a bit of C# code you get lots of squiggly red lines telling you that Visual Studio has no idea what the classes, types, etc. in the code refer to. And since I've just searched the web because I didn't know how to solve a problem, or because I'm learning some new framework or paradigm from an example on a page, I usually have no idea which library or libraries I need to reference for all the dependencies to be satisfied.
Case in point: I just had a look at the Google Drive API "Quickstart". They show you a simple snippet of source code you are supposed to try yourself. They do not give you a Visual Studio project, just the code on the screen for you to copy and paste. They also tell you to download the API libraries. I did. The download has what looks like a common library directory with 10 or so DLLs (and various other files) and a separate folder with 45 folders for various "Services", and inside those, more DLLs. And I am somehow supposed to know which DLLs this 20-line piece of source code needs??? So to be safe I ended up including all the common libraries and both of the libraries under the "DriveService" folder. But the code won't compile. All the references are satisfied, but now there's ambiguity because an extension method is defined in two separate imported DLLs. It took me another 20 minutes to figure out which one I didn't need. Why do we have to go through all this??? It is all so utterly needless. I can't tell you how many times I've been unable to try out a piece of source code because something went wrong in figuring out and finding the libraries that were needed, and which versions of those libraries (since libraries can change radically with every release).
What boggles the mind is that neither Visual Studio nor third-party VS plugins like ReSharper do anything to help. Surely something could be done to largely eliminate this problem! At the very least, why couldn't they include a "header"-like region at the top of the VS editor UI which lists the actual fulfilled references for the active file? It wouldn't actually be part of the source code; it would just be a handy little (perhaps collapsed) virtual piece of commented code that would be copied whenever you Ctrl+A, Ctrl+C the file contents. And when you pasted it elsewhere it would let people know what out-of-band files they were missing. The format would probably just specify the Portable Executable data for each file and its hash (not the actual path, which would be less useful and less anonymous).
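To make the idea concrete, here is a purely hypothetical sketch of what such a virtual header might look like when pasted along with the code. Every assembly name, version, and hash below is invented for illustration; nothing like this format actually exists today:

```csharp
// === VS REFERENCE HEADER (virtual; auto-generated, not part of the file) ===
// ref: Google.Apis.dll          Version=1.2.0.0  SHA1=9f2c... (hypothetical)
// ref: Google.Apis.Drive.v2.dll Version=1.2.0.0  SHA1=41d8... (hypothetical)
// === END VS REFERENCE HEADER ===
```

On paste, an IDE or plugin could read these comment lines, match the hashes or PE metadata against a database of known libraries, and offer to fetch or wire up the missing references.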
Here's hoping they do it one day, or someone makes a nice little third-party plugin that is able to sort it out for you (by having a massive DB of exported library functions and some good heuristics)...
All certificate signing agencies basically do the same thing: they provide a means by which a user browsing a site or using a piece of software can know who is operating the site or writing the software. Code signing (and signing in general) is a wonderful thing; I fully believe in it. But you don't need these centralized commercial entities to provide it, and I'm just not convinced of the value added by signing authorities which charge a lot of money to (in my view) add only a thin veneer of security.
The vast majority of those applying for certificates are surely entirely legitimate and provide entirely legitimate details. That means that the vast majority of certificates signing authorities give out are entirely valid. But that's not proof that the system is good. Surely the effectiveness of security is determined not by those who intend to stay within the law but by those who intend to violate it. Airport security is not good because it finds no bombs on lawful people; it is only good if it is able to find bombs on unlawful people. Certificate agencies' version of due diligence is laughable: they generally require nothing more than emailed (or faxed) images of the desired documents. Could someone submit easily photoshopped documents to a signing authority and have their credentials "validated" such that they get a signing certificate? Yes, it's been done. And even if the certified owner was valid at the time the certificate was issued, the security provided to end users (those who are supposed to rely upon the certificates) is fleeting at best; the certificate owner can always move, disconnect their phones, or give the certificate to others.
Now, these signing authorities are in no way radically different from the purveyors of other, more traditional security products. It's true that we put locks on our houses and secretly know they would keep out only the laziest or stupidest of criminals (lock picking being an easily acquired skill and glass being easily broken), but signing certificates have the potential to be so much better. The fact that they are not, and that they cost so much while being not much superior to self-signed certificates, frustrates me. I just wish signing authorities would either do more (require you to show up face-to-face at an office with a passport to be fingerprinted and DNA mapped) or do less (acknowledge how easily they may be deceived and not make you jump through hoops to proffer false 'proof').
But here we are in the land of is... God bless the industry of false security.
Google+ Hangout, the free multi-party video conferencing offering, is a pretty fantastic alternative to Skype (and its paid multi-party option). Wouldn't it be wonderful to have a UI control you could drop into any .NET application that gave you all the power of Google+ Hangout? Well, it would... and I've been working on it, but so far it hasn't proved easy.
First, a bit of back story. I have been working on an app which features embedded video conferencing and had initially gone with Skype. Skype has been a somewhat miserable experience thus far: workable, but only just. The only way I've been able to integrate with Skype has been their Skype4COM option. Skype4COM allows you to remote control certain features of Skype from a third-party application. You can initiate calls, hang up, mute, and so on, but you can't hide the original Skype interface or embed its video in your own application. There is a way to do all that, and it's SkypeKit. But for reasons unknown to me they seem to have suspended SkypeKit access. I applied to the program many months ago and my account still says something like, "We'll get back to you about SkypeKit when we're ready for you." I've heard from others that that's just the way it is right now, that they are redoing SkypeKit or something. At any rate... Skype isn't a great solution at the moment. Google+ Hangout, on the other hand, would be perfect, if only it worked.
I spent a few days a few weeks ago trying to create a Windows control that would let me embed Google+ Hangout inside my own application. The logical approach is to customize a web browser control to load the web-based Google+ Hangout, then modify the rendered content and inject JS as necessary to achieve the desired control-ifying of Hangout. I've done that sort of thing before, so I didn't think it would prove so tricky.
Microsoft WebBrowser Control
I first tried using the built-in Microsoft WebBrowser control as the hosting control. I automated Google account sign-in and had it load the Google+ Hangout page, and hit the first major roadblock: the page gave me a warning about my browser agent not being supported. I went back and added code to spoof the user agent, but that didn't work; the WebBrowser control isn't all that sophisticated and only spoofs the user agent for the first request, not for subsequent ones or for requests the loaded page makes. I tried several alternative WebBrowser extension classes that try to intercept navigation requests and replace them with Navigate calls that include the spoofing, but they didn't seem to work properly. If memory serves, I did reach a point where I was able to call the JS to start a hangout, but everything hung when it tried to install/start the hangout.
The next option I tried was Awesomium, a Chromium-based, behind-the-scenes browser rendering system. After looking at some of their examples and struggling a bit with their concepts (which differ radically from the WebBrowser control and MozNET control approaches I was used to), I realized I could use one of their demo apps as a quick way to test the concept. They had a tabbed web browser demo which I used to access Hangout. I was able to initiate a Hangout, but the video was not contained as it should have been within the Awesomium demo browser; the Hangout window was at the top left of the screen whereas the browser was in the middle. So it worked, but if the demo couldn't control where the video was rendering then I didn't think an Awesomium-based solution would be easy.
MozNET / XULRunner
Next I tried my old friend, MozNET. MozNET is a XULRunner implementation which I've quite enjoyed using before. There again I went the easy route first and used a demo browser example to see if I could get it working. Sadly it did not work; it would just hang at the step where Hangout checks for its plugin. I feel like a MozNET solution wouldn't be too hard to achieve, but I don't have the depth of knowledge in it to make it happen easily. I know MozNET can be made to work with various XPI-based plugins.
Oddly enough, Google+ Hangout doesn't seem to be an XPI plugin. I did a procmon.exe dump of Firefox while using Hangout and I see access to:
And a separate EXE gets launched:
C:\Users\foo\AppData\Local\Google\Google Talk Plugin\googletalkplugin.exe
But I'm not sure what is handling the communication between Firefox and the Hangout code, or how.
If anyone has any thoughts they'd like to share, please let me know! I think the world would benefit from an embeddable Google+ Hangout control... I know I would.
I couldn't help but be a little intrigued by all the Raspberry Pi hype. A computer smaller than a deck of playing cards, able to run Linux/ChromeOS/etc., and costing only $25-35 (depending on the model) sure sounded interesting. There is no end to the computing projects I have in mind to undertake, so this seemed the perfect platform for them, particularly since the Raspberry Pi community is so friendly and supportive.
Well, having had my Raspberry Pi (model 2) for a week now I can certainly say that it's cool alright, but I'm increasingly convinced that its use in the desktop-related computing projects I had in mind is severely limited. The official Raspberry Pi Debian release runs, and includes a resource-friendly web browser and other resource-friendly apps, but attempting to run anything else is painful. One project I am working on uses JonDo, the magnificent privacy proxy, so I tried to see if the JonDo client would work with the Raspberry Pi. It does install, and run, but it is so painfully slow as to be utterly unusable (perhaps because of the Java overhead or perhaps because of the encryption demands). So much for that.
The thing I love most about the Raspberry Pi so far has less to do with it and more to do with the discontinued Motorola Lapdock. A couple of years ago some people at Motorola and elsewhere thought that what people really wanted was a way to use their phone as a laptop, and I remember all the hype surrounding the "lapdock" which would let you do just that. Unfortunately, at a price of $500, people really didn't want it, opting instead for cheaper $250 netbooks and $250-600 iOS/Android tablets. Sad for Motorola, but great for the rest of us, because these over-produced lapdocks have been hitting the deep-discount sales sites for the last year or so, which are currently selling them for $49! What you get for $49 is a fabulously elegant, slim 10" display with keyboard, touch pad, and built-in rechargeable battery pack! I seriously know of no better tech deal ever! Now, the cool part is that rather than use proprietary connectors the lapdock uses separate micro HDMI and micro USB connections, and since these are universal standards you can connect a Raspberry Pi or anything else you want to them! I bought a second Motorola Lapdock to use as part of my emergency computer repair tool kit; with this thing and a few cables I've got a mobile keyboard/mouse/monitor I can hook up to any down server or computer with questionable peripherals.
In the case of Raspberry Pi this means that for $49 (Motorola Lapdock) + $35 (Raspberry Pi model 2) + $10 (cost of cables) you have a $94 laptop. Admittedly it's a pretty underwhelming laptop in a field where vastly more powerful laptops can be had for just over $200, but still... If you're buying a Raspberry Pi for anything other than experimenting then you're doing it wrong.
Watch the video above to learn what cables you need and how to modify them; the girl in the video throws me off a bit, I think it's the Ferdinand the Bull nose ring and reddish hair. Also, check out this cool modification to learn how to add a super capacitor to your Raspberry Pi as a great little backup battery/brownout protector (which is particularly useful with the lapdock).
In a moment of anything but wisdom, Microsoft has decided to leave earlier versions of the .Net (dotnet) Framework out of the Windows 8 install, including only 4 and 4.5. The reason they give for this peculiar decision is their desire for a smaller OS install footprint. While less disk space lost to an OS install is a very noble goal, I can think of few things worse to leave out. Any Windows 8 user who subsequently downloads and wants to use an application written against the 3.5 or earlier .Net runtimes will be forced to install (over the 'net) a multi-hundred-megabyte, reboot-required installer (supporting .Net 3.5, 3.0, and 2.0). Few things deter a potential user of your software more than a lengthy download and a forced reboot.
Adding insult to injury is that I am quite sure their smaller OS footprint goal is little more than an attempt to defend against one of Apple's (and others') easy anti-Windows attacks. Unless Microsoft has radically altered the way they handle Windows Updates, their Driver Store, WinSXS, temporary files, etc., whatever savings they claim at initial install will be gone in a few months; the Windows directory of my 1.5-year-old computer is a whopping 37 GB.
Why couldn't Microsoft leave out MS Paint, MS Write, Solitaire, the audio recorder, Pinball, or hell, even Internet Explorer, and include the full range of .Net support? Now we poor developers are going to once again need to distribute versions of our software targeting multiple runtimes just to ensure most users don't have to do the absurd .Net installs.
I've been a huge fan of and user of AutoHotkey (AHK) for years, but I've got to admit (with a sense of betrayal) that I'm increasingly impressed with AutoIt. Last week I had an automation project to do and began to code it in AHK, only to run into several major roadblocks. For the automation I needed to traverse a third-party application's tree view UI to find a specific entry and click it. Later in the automation I had to do something similar with a list view control. I had expected to find easy mechanisms or code samples to do this in AHK. To my surprise I found relatively little; the built-in functions relate to creating those GUI elements, not to manipulating already existing ones. And the little sample code/DLLs I did find didn't seem recently updated and didn't work (with AutoHotkey_L). I stumbled across AutoIt threads on the topic and was pleased to discover it was quite easy with AutoIt, which officially supports those features in its standard include libraries. And thus began my journey into AutoIt.
Here are my impressions:
- The language syntax of AutoIt is more consistent than AHK's, and mostly for that reason I liked it more. When I first started with AHK I found it really confusing that AHK supported multiple distinct paradigms: foo = bar and foo := "bar", as well as the whole Foo(Bar) versus Foo, Bar thing (not to mention Foo Bar, the first comma being optional!?). I still find myself making quite a few typos/errors related to these situations... forgetting what's a normal function and what's the other style of function, putting a := when I meant a =. I'm sure the explanation for all this is historical, but the lingering embrace of all the styles simultaneously is odd (why can't Foo, Bar be called as Foo(Bar) so that people can write to the new paradigm)?! Oh, and not to mention the hotkey hooking/specification stuff mixed right in with regular code, which also confused me.
- The packaging of the AutoIt setup/install is impressive, including the SciTE editor, example code, the extended library of functions, x86 and x64 compilers, an obfuscator, a build tool, an auto-updater, and more. I haven't installed AHK recently, so maybe AHK does just as complete an install. I was just pleased that in my testing/development I had to set this up on 4 computers and I couldn't have asked for an easier time of it.
- AutoIt has embeddable compiler and obfuscator directives! You can embed commands in the source that will trigger obfuscation, generation of both x86 and x64 binaries in one compilation run, you can include resources, set the EXE manifest-related data including administrator elevation, PE details, etc. Very nice!
- AutoIt's help files are almost useless when compared to their AHK counterparts. The index list and keyword search seemed to miss a great deal that should be in the documentation, and it seems as though many (if not most) of the official support library functions aren't included in the help documentation at all. If you do find the page you need, everything is okay; they have good examples and references. But I'd swear 60-70% of the time I couldn't find what I needed and had to jump over to their forums or search with Google.
- The AHK community is absolutely amazing, and it would be hard to top them in terms of friendliness, helpfulness, knowledge, code-sharing, etc. I have only been an observer on the AutoIt boards as I looked for other people's solutions, and so perhaps my observation is meaningless, but I saw more grumpy unfriendliness towards newbies than I'd remembered seeing on the AHK boards. (I'm not saying the AutoIt community isn't great, too, it probably is, it just might be a little less tolerant of newbies and their poorly researched questions.)
- AutoHotkey automatically handles most UI interaction logic for you (via gLabelName identifiers in the various GUI element creation commands), whereas AutoIt requires you to create your own window message processing loop, with a switch/select on the message, to handle every interaction to which you want to respond.
- As mentioned earlier, there's a distribution-included obfuscator, which seems pretty good. The quasi-lack of one for AHK has been an annoyance of mine; AHK_L doesn't do the password thing any more, and I never had much luck with Hotkey-Camo or anything else.
- I was impressed with how quickly I was able to jump right into AutoIt using my AHK knowledge. I imagine it'd be harder coming the other way, because of the unusual multi-paradigm AHK language thing. Both languages are remarkably similar in their use, with many functions being identical in name and use. Example: Send, Foo in AHK is Send("Foo") in AutoIt. Within a few hours I was able to automate a relatively complicated and branched Windows dialog flow (related to driver installation, involving tree view navigation, list view navigation, support for different scenarios on different versions of Windows, etc.).
In no way am I concluding that AutoIt is better than AutoHotkey, nor can I conclude the opposite. My love of AutoHotkey isn't wavering, but I am glad AutoIt was there for a task which seemed like it would have been harder for me to do in AHK with the existing public code. So if you ever find yourself in a similar situation you needn't feel shy about trying out AutoIt.
If you're like me, you're a decent, law-abiding citizen who feels that privacy is a fundamental right, not merely something we enjoyed by default because technology had not yet found a way to eliminate it. Fortunately, technology brings us both problems and solutions. One such solution is JonDo, a popular and somewhat proven anonymous proxy service. This article will show you how to create a secure, anonymous browsing platform to ensure your right to free thought and inquiry is preserved.
Create the Virtual Machine
First we need to take the ISO of the JonDo Live CD and turn it into a virtual machine; I'll walk you through those steps. It's important to note that we are not creating a persistent install here; that's beyond the scope of this article, and with JonDo still in beta I'm not sure I'd recommend it. The install we are building will let you make changes to the file system, but those changes will be lost when the virtual machine is rebooted. We're going to cheat a little and use VMware's snapshot feature to lock in any file system changes we want, and use VMware's host-guest shared folders to make some file system changes effectively persistent. But all that is to come after we do the basics!
- Download the latest JonDo Live CD
- Verify the hash of the file you downloaded against the MD5 hash listed on the download page. I recommend HashTab for Windows or Mac users.
- Create a new virtual machine in VMware.
- Choose Typical
- Set the "Installer disc image file (iso)" as the JonDo Live ISO file you downloaded. Click Next.
- Choose Linux as the guest operating system and Debian 5 as the version. Click Next.
- Choose the name of your virtual machine (e.g., "JonDo Live")
- Choose the location where you want the files to be. Click Next.
- Choose a small maximum disk size; I chose 1 GB. With my current setup I don't even use it. Click Next.
- Click "Customize Hardware".
- I increased the memory to 1 GB
- I added a second CD-ROM drive, defined as an ISO pointing to the VMware Tools image (e.g., C:\Program Files (x86)\VMware\VMware Workstation\linux.iso). If you do this you may need to set the drive as initially not connected, otherwise VMware might try to boot off this CD-ROM device instead of the one with the live image, depending on how VMware orders the drives; you can then connect the drive from the VMware lower toolbar once you've booted into the OS.
- I removed the floppy drive
- I set the Network Adapter as Bridged with replicate physical network connection state.
- After leaving the customize hardware screen, uncheck the power on after finishing option.
- (Optional) I now "Edit Virtual Machine Settings" and on the Options tab I go to "Shared Folders" and create a share which is "Always enabled"; I called my share "shared". Reminder, this Live CD VM is not a persistent install, so this is where you can keep files/settings/etc. you don't want to risk losing.
- Power on this Virtual Machine
- When you get to the boot menu choose the "486" option (not failsafe, not 686, and not anything with PAE)
- When you boot, it may say you have no network connection; click the network icon in the task bar and choose "Auto Ethernet". You should now have a network connection.
Begin Using JonDo
Your JonDo Live VMware virtual machine is now ready to use!
Before you go and do a lot of anonymous browsing you really should install the VMware Tools; they will greatly enhance your overall experience of this virtual JonDo machine.
Install VMware Tools (optional)
You are perfectly free at this point to use your JonDo Live virtual machine, but the beauty of VMware is its ability to allow you to flit between host and guest operating systems, effortlessly moving your mouse, sharing your clipboard, exchanging files, and resizing the display.
These steps are a little annoying, but a few hours of my working through the issues will hopefully make it easy enough for you. The reason we can't just directly install the VMware Tools is that they have dependencies which are not fulfilled by the JonDo Live image as delivered.
- Go to a terminal window (click the terminal icon on the bottom task bar).
- Type "sudo bash" to get a root shell.
- Type "apt-get install make"
- Type "apt-get install gcc-4.1"
- Type "apt-get install linux-headers-`uname -r`". If you get the error "can't find any package" then the Linux headers for your kernel version may no longer be in the repository; you'll need to find a repository that has them and add it to /etc/apt/sources.list. If you get an error about not finding something needed for the install, run "apt-get update" to refresh the package list and re-run the install of the Linux headers. (See below for more info if you are having trouble finding the appropriate kernel header sources.)
- Type "apt-get install psmisc"
- On the Desktop right click the "VMware Tools" CD icon and select "Mount". Its contents will now be located at "/media/VMware Tools".
- Type "cp /media/VMware Tools/VMwareTools-8.4.8-491717.tar.gz /tmp" to copy the tools archive to the /tmp directory (modify the file name as needed to accommodate future versions)
- Type "cd /tmp"
- Type "gunzip VMwareTools-8.4.8-491717.tar.gz"
- Type "tar xvf VMwareTools-8.4.8-491717.tar"
- Type "cd VMwareTools-8.4.8-491717"
- Type "./vmware-install.pl" to begin the installer
- Choose the defaults for everything they ask (just hit enter/return each time)
- When it is finished type "/usr/bin/vmware-user" to start up the VMware Tools
Congratulations! You now have the VMware Tools installed.
Your shared folder is available inside the JonDo VM at "/mnt/hgfs/shared".
Additional Kernel Header Sources
On a recent update of my JonDo Live environment I found that the kernel headers had been removed from the default repository and I couldn't seem to find them anywhere... After some hours I figured out how to solve the problem. You can manually find the Debian packages for the Linux headers and then manually install them. The site which has these archived repositories is http://snapshot.debian.org; it lets you see into the past by specifying a date/time combination to navigate the archive.
The way I located the files I needed probably isn't the best, but here's what I did. First, I navigated to the root of the dated repository. For example, http://snapshot.debian.org/archive/debian/20120806T041225Z/ shows the repository state on August 6th, 2012, a date soon after the release of the kernel version I had (found with uname -a). There are two Debian packages for the Linux headers: the "common" one and the architecture-specific one. You will need to manually download both of those files and then manually install them.
First I found the Packages.bz2 file which lists all the various packages; you'll need to download, uncompress, and view this file. My dated one was located here: http://snapshot.debian.org/archive/debian/20120806T041225Z/dists/wheezy/main/binary-i386/Packages.bz2. Manually search that file for a package called linux-headers-3.2.0-3-486 (substitute your `uname -r` entry for the version I mention). You will see a path there that corresponds to a location off the root (e.g., http://snapshot.debian.org/archive/debian/20120806T041225Z/). That package has a dependency on the "common" header package, so we now need to find that one too. Looking again in Packages.bz2, I found the entry for "linux-headers-3.2.0-3-common" (again, modify for the version you have) and downloaded the package from the location indicated. Once you have them both downloaded, install each by running the "dpkg -i PACKAGENAME.DEB" command, starting with the "common" package.
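The procedure can be sketched as a few shell commands. This is only a sketch: the snapshot date is the one from my example, and the actual pool paths must be read out of Packages.bz2 for your kernel, so the wget/dpkg steps are shown as commented placeholders rather than real URLs.

```shell
#!/bin/sh
# Derive the two header package names needed for the running kernel.
SNAP=http://snapshot.debian.org/archive/debian/20120806T041225Z
KVER=${KVER:-$(uname -r)}            # e.g. 3.2.0-3-486
COMMON="${KVER%-*}-common"           # strip the arch suffix: 3.2.0-3-common
echo "need: linux-headers-$COMMON and linux-headers-$KVER"
# Download each .deb from the path listed for it in Packages.bz2, e.g.:
#   wget "$SNAP/<pool-path-for-linux-headers-$COMMON>.deb"   (placeholder)
#   wget "$SNAP/<pool-path-for-linux-headers-$KVER>.deb"     (placeholder)
# Then install, "common" first:
#   dpkg -i linux-headers-"$COMMON"_*.deb
#   dpkg -i linux-headers-"$KVER"_*.deb
```

The only real logic here is the name derivation; everything else is the manual download-and-dpkg dance described above.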
Once you install both packages you can proceed to step 6 above!
Making your Environment Persistent (Optional)
After you've gotten everything configured, including importing your existing JonDo account info or creating your premium account, you'll want to save the configuration work you've done so you won't lose it when the virtual machine reboots. All you need to do is use the "VM" menu, click the "Snapshot" menu item, then choose "Take Snapshot". As you likely know, this allows you to return to this exact state of the machine at any future time, complete with the file system, memory, display, etc. exactly as they were at this moment. Instead of booting or rebooting your JonDo VM you can just revert to this snapshot. Any files you wish to keep persistent, and not see reverted or erased, you should put in the shared folder you optionally created earlier. For example, I keep things like downloaded files, bookmarks, my JonDo exported credentials, etc. in this shared location (e.g., /mnt/hgfs/shared).
Securing your Data Locally (Optional)
To further ensure your privacy you can (and probably should) make sure your virtual machine files (the files VMware uses to store your VM data) are encrypted, either the files themselves (using Windows' built-in encryption option) or, better still, by placing the entire directory inside an encrypted virtual drive using a product such as the free TrueCrypt. Be aware, however, that when you use your virtual machine its RAM will be held in your real, physical RAM, and as such it can and will be stored in the host's Windows pagefile.sys, where it could potentially be recovered much later, having been written to disk. The solution in this case is to encrypt your entire system disk with TrueCrypt, so that the swap file is also encrypted, or to use an encryption product like Jetico's container encryption, which includes swap file encryption as an option.
It is sad that it's come to this, that we honorable, law-abiding citizens must defend ourselves against the unreasonable invasion of our thoughts and study of our activities, but wishing it was not so accomplishes little. Hopefully this little guide will have helped you take back some of your privacy.
Today I had a very bizarre problem. I was trying to copy a large (50 GB) file from a laptop's hard drive to an external USB drive. I'd already copied an even larger file (150 GB) just a few hours earlier without incident. But this 50 GB file would begin at full speed (about 30 MB/s), then start to get progressively slower as it approached about 8 GB, and would essentially be stalled by the time it reached 9 or 10 GB transferred, slowing to the point where it was clear the transfer would never get any further. I tried every recommendation I could find relating to slow copying and external USB drives: I updated the external USB drive's firmware, set the drive to be optimized for performance, tried rebooting, and used several different copy methods, and always got the same result. Because of some initial slowness with this 50 GB transfer I'd begun using the Windows Performance Monitor to watch just what might be slowing down the file copy. This allowed me to resolve the initial problem: Raxco's PerfectDisk was trying to defrag as I was doing the copy. But after PerfectDisk was off the problem remained, or at least persisted in a slightly different form. One odd thing I noticed in Performance Monitor was that the wait time for the drives in question would sit at 0 and then suddenly jump to 5 seconds, and back, all while the disks appeared to be doing almost nothing. After a while I used Sysinternals' great Process Monitor software to flag anything that involved the path of the source or target disks, and there it was: perfmon.exe was the only other thing accessing the target drive. I shut perfmon.exe down and the speed went from the languid 40 KB/s it had become back to the normal 31 MB/s. Apparently perfmon.exe has a little problem!
So the lesson of the day is: Don't leave perfmon.exe running (for long) when doing big file copies!
I'm only now getting around to documenting this (my memory was jogged for some reason the other day), but back in 1998 I came up with an idea for adding a new "dimension" to password protection schemes without actually requiring that the user do anything different. The new "dimension" was time, specifically the timing of the user's keystrokes as they entered their password. I developed a working prototype which observed a user's keystroke behavior as they entered their password, recording the length of time they held each key down as well as the length of time between keystrokes. My prototype code then turned this data into a reasonably robust signature which could be stored and used for comparison at future logins. The signature method was designed to stand up to "normal" daily variations in typing speed and coordination while still generating the same representation; the sensitivity could be adjusted by tweaking a series of constants. I captured a few hundred samples of people typing in their passwords over several days in order to establish to my own satisfaction that the idea and its initial implementation were solid. The elegance of the idea is that it imposes no new requirements on users or the passwords they choose. The user does as they always do, and the system offers the additional protection.
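To make the idea concrete, here is a hypothetical, much-simplified sketch of that kind of timing signature. This is not my original prototype (and the particular signature format, the tolerance constant, and the minimum-spread floor are all illustrative assumptions): it enrolls a signature as per-keystroke means and standard deviations of hold times and inter-key gaps, then accepts a login attempt only if every timing falls within a tunable number of deviations.

```python
import statistics

def make_signature(samples):
    """Build a timing signature from several entries of the same password.

    Each sample is a list of (hold_ms, gap_ms) pairs: how long each key
    was held down, and the pause before the next keystroke.  Returns,
    for each keystroke position, the mean and standard deviation of the
    hold and gap times across all samples.
    """
    signature = []
    for position in zip(*samples):  # gather all samples for one keystroke
        holds = [hold for hold, _ in position]
        gaps = [gap for _, gap in position]
        signature.append((
            statistics.mean(holds), statistics.pstdev(holds),
            statistics.mean(gaps), statistics.pstdev(gaps),
        ))
    return signature

def matches(signature, attempt, tolerance=2.5, min_spread=10.0):
    """True if every keystroke's timing falls within `tolerance` standard
    deviations of the stored signature.  `min_spread` (in ms) keeps an
    unusually consistent typist from locking themselves out; both
    constants are the kind of sensitivity knobs mentioned above.
    """
    for (mean_h, sd_h, mean_g, sd_g), (hold, gap) in zip(signature, attempt):
        if abs(hold - mean_h) > tolerance * max(sd_h, min_spread):
            return False
        if abs(gap - mean_g) > tolerance * max(sd_g, min_spread):
            return False
    return True

# Enroll from three (fabricated) entries of a two-keystroke password,
# then test a similar attempt and a wildly different one.
samples = [[(95, 210), (110, 180)],
           [(105, 190), (100, 200)],
           [(100, 200), (105, 190)]]
sig = make_signature(samples)
print(matches(sig, [(102, 198), (104, 192)]))  # close to the signature
print(matches(sig, [(300, 600), (110, 180)]))  # first keystroke way off
```

A real system would of course compare hashes of the password text as well; the timing check is an additional factor layered on top.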
There are a few items which I did not address in the prototype but which would clearly need to be addressed in an actual implemented system. If a person changes their password, you can expect that for some time the typing signature will be in flux, adjusting as their fingers adapt to the new sequence of letters and characters they've chosen. The system must recognize and allow for this, replacing the stored signatures over time to reflect the changes. It's also important to note that certain situations will make the signature less consistent, such as when a user only infrequently uses that particular password. Specific incidents, like an injury, would alter the signature as well. In all these cases, where the new and old signatures do not match, a secondary check procedure would be needed. This could mean asking the user to verify some additional piece of information, such as would be asked for password recovery (e.g., mother's maiden name, name of their first pet, etc.), or perhaps temporarily denying access and sending an alert to the user by email, requiring them to step through some authentication procedure.
The idea may not have been as advanced as retinal or fingerprint scanning, but I think it was still a good one, and I remain surprised I've not seen it developed.
My friend Arvin sent me this link to just such a system http://technology.timesonline.co.uk/tol/news/tech_and_web/personal_tech/article1667057.ece. Another potential patent slipped through my fingers.
If you're serious about playing around with Android I urge you to check out my article on how you can convert a $249 Barnes & Noble Nook Color e-reader into a full Android tablet! I just did it and it's turning out to be one of the coolest gadgets I've had!
Tonight I wanted to play around with the Google Android OS for mobile devices, but having neither an Android tablet nor a phone, I was forced to investigate how I could run it on my computer. I found the answer I was looking for and succeeded in running it on my PC. Here is my super quick guide on how you can do it, too.
You will need the virtual machine software VMware Player or VMware Workstation. If you don't have either, you can download and install VMware Player for free.
Grab the Android Live ISO; the one to use is the Asus Eee PC version. (I tried the generic version and it wouldn't even boot under VMware.) You can navigate to the latest version here or just use this direct link for the 2.2 version.
Configure the VMware Player or VMware Workstation options for this VM. You want to choose:
- CD/DVD pointed at the ISO file you just downloaded for Android
- 512 MB memory
- Any network setting should work (BUT, you will need to follow the instructions in step 3)
- Sound card should be changed to "SB X-Fi Audio"
- 2 GB IDE hard disk (optional)
With the VM powered off, use a text editor to modify the .vmx file that VMware created. You MUST change the existing ethernet0.virtualDev line so that it reads:
ethernet0.virtualDev = "vlance"
If you don't make this change you will have no network access in Android!
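For reference, putting the settings above together, the relevant lines of the .vmx file should end up looking roughly like the following. The key names here (memsize, ide1:0.fileName, ethernet0.virtualDev) are standard VMware configuration keys, but the exact values, particularly the ISO filename, will depend on your download, so treat this as an illustrative sketch rather than a file to paste verbatim. (I've left the sound device out; set that through the VMware UI as described above.)

```
memsize = "512"
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "android-x86-2.2-eeepc.iso"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vlance"
```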
Power on the Android VM and from the bootloader screen choose the first option and everything should work!
Making it Permanent
The above works great for getting a feel for Android, but because this is a "live" version of Android using a RAM disk for temporary storage, all your changes will be lost when you shut down or reboot. Making your environment permanent is actually very easy:
- Reboot the virtual machine (Power > Reset in VMware)
- Choose the "Install to hard disk" option from the bootloader
- Create a single primary partition in the partition editor, using all available space. Make the partition bootable. Quit the partition editor.
- Allow it to install the OS to the selected partition, using ext3.
- Allow the installer to use Grub as your boot loader.
- Do not attempt to create a virtual SD card. (I didn't investigate how this works, but when I tried it, it appeared to overwrite the OS I'd just written to disk. So don't do this unless you know what you're doing.)
- Choose to Run Android x86 when asked.
And now you've got a permanent Android x86 virtual machine!
Certain features are not supported by Android x86, primarily those applications which require devices missing from the virtual machine (e.g., the camera). Other applications, such as YouTube, appear to work except that videos do not seem to play; I suspect this may have to do with hardware acceleration missing from the virtualized environment. Also, check out the many debugging- and virtualization-related options in the app list; you can do things like spoof geolocation. While limited in some respects, this is an excellent tool for testing and debugging your web and mobile apps on Android.
Have fun playing around with it!