I love C#. It takes everything I loved about my years programming in early Java and adds boatloads of wonderful. But one thing perpetually infuriates me. Every C# source file references classes, types, and other identifiers that are resolved against two things: the list of "using" statements at the top of the file, and a list of library references managed separately in the Visual Studio UI. The problem is that all the wonderfully helpful source code people post on the web never includes those UI-managed library references, which means that any time you copy and paste a bit of C# you get lots of squiggly red lines telling you that Visual Studio has no idea what the classes and types in the code refer to. And since I searched the web precisely because I didn't know how to solve a problem, or because I'm learning some new framework or paradigm from an example on a page, I usually have no idea which library or libraries I need to reference for all the dependencies to be satisfied.
Case in point, I just had a look at the Google Drive API "Quickstart". They show you a simple snippet of source code you are supposed to try yourself. They do not give you a Visual Studio project, just the code on the screen for you to copy and paste. They also tell you to download the API libraries. I did. The download has what looks like a common library directory with 10 or so DLLs (and various other files), plus a separate folder with 45 folders for various "Services", and inside those, more DLLs. And I am somehow supposed to know which DLLs this 20-line piece of source code needs??? So to be safe I end up including all the common libraries and both of the libraries under the "DriveService". But the code won't compile. All the references are satisfied, but now there's an ambiguity error because an extension method is defined in two of the imported DLLs. It takes me another 20 minutes to figure out which one I don't need. Why do we have to go through all this??? It is all so utterly needless. I can't tell you how many times I've been unable to try out a piece of source code because something has gone wrong in figuring out and finding the libraries that were needed, and which versions of those libraries were needed (since libraries can radically change with every release).
What boggles the mind is that neither Visual Studio nor third-party VS plugins like ReSharper do anything to help. Surely something could be done to largely eliminate this problem! At the very least, why couldn't they include a "header"-like region at the top of the VS editor UI which lists the actual fulfilled references for the active file? It wouldn't actually be part of the source code; it would just be a handy little (perhaps collapsed) virtual piece of commented code that would be copied whenever you ctrl+a ctrl+c the file contents. And when you pasted it elsewhere it would let people know what out-of-band files they were missing. The format would probably just specify the Portable Executable data for each file and its hash (not the actual path, which would be less useful and less anonymous).
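To make the idea concrete, the virtual header I'm imagining might look something like the sketch below. This is an invented, illustrative format, not anything Visual Studio actually emits; the assembly names, versions, and truncated hashes are placeholders.

```csharp
// === Fulfilled references (virtual region, not part of the file) ===
// ref: System.Net.Http.dll        ver 4.0.0.0    sha1 a3f0...
// ref: Google.Apis.dll            ver 1.6.0.27   sha1 91bc...
// ref: Google.Apis.Drive.v2.dll   ver 1.6.0.99   sha1 07de...
// ===================================================================
```

Paste that block into a fresh project and a tool (or a patient human) would know exactly which binaries to hunt down, without the header leaking anything about your local paths.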
Here's hoping they do it one day, or someone makes a nice little third-party plugin that is able to sort it out for you (by having a massive DB of exported library functions and some good heuristics)...
All certificate signing agencies basically do the same thing: they provide a means by which a user browsing a site or using a piece of software can know who is operating the site or who wrote the software. Code signing (and signing in general) is a wonderful thing, and I fully believe in it. But you don't need these centralized commercial entities to provide it, and I'm just not convinced of the value added by signing authorities that charge a lot of money to (in my view) add only a thin veneer of security.
The vast majority of those applying for certificates are surely entirely legitimate and provide entirely legitimate details. That means the vast majority of certificates signing authorities give out are entirely valid. But that's not proof that the system is good. The effectiveness of security is determined not by those who intend to stay within the law but by those who intend to violate it. Airport security is not good because it finds no bombs on lawful people; it is only good if it finds bombs on unlawful people. Certificate agencies' version of due diligence is laughable: they generally require nothing more than emailed (or faxed) images of the desired documents. Could someone submit easily photoshopped documents to a signing authority and have their credentials "validated" such that they get a signing certificate? Yes, it's been done. And even if the certified owner was valid at the time the certificate was issued, the security provided to end users (those who are supposed to rely upon the certificates) is fleeting at best; the certificate owner can always move, disconnect their phones, or hand the certificate to others.
Now, these signing authorities are in no way radically different from the purveyors of other, more traditional security products. It's true that we put locks on our houses while secretly knowing they would keep out only the laziest or stupidest of criminals (lock picking being an easily acquired skill and glass being easily broken), but signing certificates have the potential to be so much better. The fact that they are not, and that they cost so much while being barely superior to self-signed certificates, frustrates me. I just wish signing authorities would either do more (require you to show up face-to-face at an office with a passport to be fingerprinted and DNA mapped) or do less (acknowledge how easily they may be deceived and stop making you jump through hoops to proffer false 'proof').
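For what it's worth, the self-signed alternative really is nearly free. Assuming the standard openssl command-line tool is available, something like this produces a key and a self-signed certificate in one shot (the subject fields are placeholders; substitute your own details):

```shell
# Generate a 2048-bit RSA key and a self-signed certificate valid for a year.
# -nodes leaves the key unencrypted; -subj avoids the interactive prompts.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=example.test/O=My Little Operation" \
    -keyout key.pem -out cert.pem
```

What that can't give you, of course, is third-party attestation of identity, which is the entire (thin) value the authorities are selling.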
But here we are in the land of is... God bless the industry of false security.
The free, multi-party video conferencing offering Google+ Hangout is a pretty fantastic alternative to Skype (and its paid multi-party option). Wouldn't it be wonderful to have a UI control you could drop into any .NET application that gave you all the power of Google+ Hangout? Well, it would... and I've been working on it, but so far it's not proved easy.
First, a bit of back story. I have been working on an app which features embedded video conferencing, and I went initially with Skype. Skype has been a somewhat miserable experience thus far: workable, but only just. The only way I've been able to integrate with Skype so far has been their Skype4COM option. Skype4COM lets you remote-control certain features of Skype from a third-party application: you can initiate calls, hang up, mute, and so on, but you can't hide the original Skype interface or embed its video in your own application. There is a way to do all that, and it's SkypeKit. But for reasons unknown to me they seem to have suspended SkypeKit access. I applied to the program many months ago and my account still says something like, "We'll get back to you about SkypeKit when we're ready for you." I've heard from others that that's just the way it is right now, that they are redoing SkypeKit or something. At any rate... Skype isn't a great solution at the moment. Google+ Hangout, on the other hand, would be perfect, if only it worked.
I spent a few days a few weeks ago trying to create a Windows control that embeds Google+ Hangout. The logical approach is to customize a web browser control to load the web-based Google+ Hangout, then modify the rendered content and inject JavaScript as necessary to achieve the desired control-ifying of Hangout. I've done that sort of thing before, so I didn't think it would prove so tricky.
Microsoft WebBrowser Control
I first tried using the built-in Microsoft WebBrowser control as the hosting control. I automated Google account sign-in, had it load the Google+ Hangout page, and promptly hit the first major roadblock: the page warned me that my user agent was not supported. I went back and added code to spoof the user agent, but that didn't work; the WebBrowser control isn't all that sophisticated and only spoofs the user agent for the first request, not for subsequent requests or the ones the loaded page fetches itself. I tried several alternative WebBrowser extension classes that intercept navigation requests and replace them with Navigate calls that include the spoofing, but they didn't seem to work properly either. If memory serves, I did reach a point where I was able to call the JS to start a hangout, but everything hung when it tried to install/start the hangout.
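For anyone fighting the same battle, the approach I've since seen referenced for making a spoof stick across all requests is to set the user agent at the WinINET session level rather than per Navigate call. The P/Invoke below uses the documented urlmon.dll API; consider it a sketch, since I haven't re-verified that it gets past the Hangout check:

```csharp
using System;
using System.Runtime.InteropServices;

static class UserAgent
{
    // Documented urlmon.dll option for overriding the session user agent.
    private const int URLMON_OPTION_USERAGENT = 0x10000001;

    [DllImport("urlmon.dll", CharSet = CharSet.Ansi)]
    private static extern int UrlMkSetSessionOption(
        int dwOption, string pBuffer, int dwBufferLength, int dwReserved);

    // Affects every WinINET request this process makes from here on,
    // including sub-resource fetches by the WebBrowser control.
    public static void Spoof(string ua)
    {
        UrlMkSetSessionOption(URLMON_OPTION_USERAGENT, ua, ua.Length, 0);
    }
}
```

Call UserAgent.Spoof(...) once, before the first Navigate, with whatever browser string the page expects.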
The next option I tried was Awesomium, a Chromium-based, behind-the-scenes browser rendering system. After looking at some of their examples and struggling a bit with their concept (which differs radically from the WebBrowser control and MozNET approaches I was used to), I realized I could use one of their demo apps as a quick way to test the idea. They had a tabbed web browser demo which I used to access Hangout. I was able to initiate a Hangout, but the video was not contained within the Awesomium demo browser as it should have been; the Hangout window sat at the top left of the screen while the browser was in the middle. So it worked, but if the demo couldn't control where the video rendered, an Awesomium-based solution didn't look easy.
MozNET / Xulrunner
Next I tried my old friend, MozNET. MozNET is a XULRunner implementation which I've quite enjoyed using before. There again I went the easy route first and used a demo browser example to see if I could get it working. Sadly it did not work; it would just hang at the step where Hangout checks for its plugin. I feel like a MozNET solution wouldn't be too hard to achieve, but I don't have the depth of knowledge in it to make it happen easily. I know MozNET can be made to work with various XPI-based plugins.
Oddly enough, Google+ Hangout doesn't seem to be an XPI plugin. I did a procmon.exe dump of Firefox while using Hangout and saw access to:
And a separate EXE gets launched:
C:\Users\foo\AppData\Local\Google\Google Talk Plugin\googletalkplugin.exe
But I'm not sure what is doing the communication between Firefox and the Hangout code.
If anyone has any thoughts they'd like to share, please let me know! I think the world would benefit from an embeddable Google+ Hangout control... I know I would.
When the Kindle (and all the other e-readers and their marketplaces) came into the world, one of the big selling points was that books would now be cheaper! And how could they not be cheaper? There was no physical book to manufacture or ship! The various e-reader marketplaces do showcase many lower-priced books, but more and more I'm seeing the Kindle version of books priced much higher than the physical books (BOTH hardcover and paperback)!
Take this recent example, the paperless Kindle version of "The Art of Innovation" is 13% more expensive than the hardcover and 70% more than the paperback!
The original logic of "it costs less to publish an e-book, so we'll charge less and the consumer will then buy more e-books" has given way to "let's charge the consumer more for the convenience of an e-book, bank the extra profit, and who cares if they buy more e-books." I'm not saying publishers can't or shouldn't do what they like; I'm just a little tired of being told white lies. If the ultimate goal is to screw us into paying more for books, don't butter us up and suggest the future will be the complete opposite.
A few times a year I run into situations where an application, a driver, or something effectively locks me out of my computer. After trying various remedies I am ultimately forced to do a hard power down of the computer. I cringe every time I am forced to take that action, praying I don't end up with corrupted files.
Today I had enough. I went to shut down my laptop and head out the door for a working lunch, only to have my computer log me out and show me Acronis True Image's dreaded, "Operations are in progress. Please wait. The machine will be turned off automatically after the operations are complete." That is Acronis True Image's way of saying, "We're not going to shut down until a backup or backup verification finishes." The problem is that those operations can take hours, and nine times out of ten the message is bogus, indicating not something in progress but a job that's hung. Today's case was one such example that would have left me waiting forever: the backup drive was disconnected, so Acronis True Image could not have been doing anything at all. When this message is displayed there is no normal way to force a shutdown other than holding down the power button. There is no ability to log in locally, no ability to log in remotely via RDP, and no ability to use Sysinternals' remote tools (I'm not sure whether the reasons relate to permissions; I've not adequately investigated). So, today I decided to put in a back door to save me in such situations.
Schedule a Task to Periodically Run a Remotely Editable Batch File
In all the cases where these sorts of things have happened, I've noticed that I can still remotely access the computer's file system just fine. That got me thinking I could use it as a vector for forcing Windows to execute code that forces the shutdown. To that end I created a shared folder on the laptop called "backdoor", set its permissions so that only I can edit its files, and created a single batch file inside it called backdoor.bat. I then set up a task in Windows Task Scheduler to execute that batch file as administrator (elevated past UAC) every 5 minutes, from now until forever. When not needed, the batch file is effectively empty, just a couple of commented-out batch commands. If I find myself locked out, I can populate the file with whatever commands might be appropriate to force the shutdown (e.g., Sysinternals' pslist, pskill, psshutdown).
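For the record, the equivalent of what I clicked together in Task Scheduler can also be created from an elevated command prompt with schtasks. The task name, path, and account below are just my illustrative choices; adjust them to your own setup:

```bat
rem Run backdoor.bat every 5 minutes, elevated, under my own account.
rem "/rp *" makes schtasks prompt for that account's password.
schtasks /create /tn "Backdoor" /tr "C:\backdoor\backdoor.bat" ^
    /sc minute /mo 5 /ru "%COMPUTERNAME%\me" /rp * /rl highest
```

The /rl highest switch is what gets the batch file past UAC without any interactive prompt.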
Since setting this up a month ago I've already had two occasions where this method saved me and allowed me to shutdown my computer gracefully!
For anyone curious, the commands I put in the backdoor.bat file are:
C:\systeminternals\pslist -accepteula > pslist.txt
C:\systeminternals\pskill -accepteula trueimagehomeservice
C:\systeminternals\pskill -accepteula trueimagehomenotify
Those lines are commented out until and unless I need them. The first line lets me grab a snapshot of the running processes into a text file I can read remotely, very useful if the system still doesn't shut down. Since the task only runs every 5 minutes, if the first attempt doesn't shut things down I've got several minutes to review the process list and find other processes to try to kill. The last two lines kill the processes that typically hang my shutdowns (I haven't bothered to check which of the two is the problem, so I just list both).
Initially I tried to just use a more generic approach and force a shutdown ("psshutdown -accepteula -r -f -t 60") but I could never get this method to work, it didn't ever seem to kill the jobs that were hanging things up.
Since setting this up I've needed to use it a dozen times or more, saving me almost as many hard resets. The most frequent situation in which I need to use it has been when Stardock's Multiplicity prevents my keyboard and mouse from being used and when Acronis' True Image prevents shutdown (see above).
Multiplicity is a fantastic app that lets your mouse and keyboard seamlessly move between different computers as though they were just extra monitors on the one computer. It is brilliant software, but it has had a hugely serious bug for all the years I've used it. If Multiplicity has given focus to another computer and that computer goes offline (network outage, sleep/shutdown, software crash), it won't let you regain the use of your primary computer. Whatever timeout logic should restore control of your primary computer fails the vast majority of the time, and you are locked out of your own machine, unable to send commands to it. My backdoor trick lets me kill off Multiplicity and regain access.
I couldn't help but be a little intrigued by all the Raspberry Pi hype. A computer smaller than a deck of playing cards, able to run Linux, Chrome OS, etc., and costing only $25-35 (depending on the model), sure sounded interesting. There is no end to the computing projects I have in mind to undertake, so this seemed the perfect platform for them, particularly when the Raspberry Pi community is so friendly and supportive.
Well, having had my Raspberry Pi (model 2) for a week now I can certainly say that it's cool alright, but I'm increasingly convinced that its use in the desktop-related computing projects I had in mind is severely limited. The official Raspberry Pi Debian release runs, and includes a resource-friendly web browser and other resource-friendly apps, but attempting to run much else is painful. One project I am working on uses JonDo, the magnificent privacy proxy, so I tried to see if the JonDo client would work on the Raspberry Pi. It does install, and run, but it is so painfully slow as to be utterly unusable (perhaps because of the Java overhead, or perhaps because of the encryption demands). So much for that.
The thing I love most about the Raspberry Pi so far has less to do with it and more to do with the discontinued Motorola Lapdock. A couple of years ago some people at Motorola and elsewhere thought that what people really wanted was a way to use their phone as a laptop, and I remember all the hype surrounding the "lapdock" which would let you do just that. Unfortunately, at a price of $500 people really didn't want it, opting instead for cheaper $250 netbooks and $250-600 iOS/Android tablets. Sad for Motorola, but great for anyone now, because these over-produced lapdocks have been hitting the deep-discount sales sites for the last year or so, which are currently selling them for $49! What you get for $49 is a fabulously elegant, slim 10" display with keyboard, touch pad, and built-in rechargeable battery pack! I seriously know of no better tech deal ever! Now, the cool part is that rather than using proprietary connectors, the lapdock uses separate micro HDMI and micro USB connections, and since those are universal standards you can connect a Raspberry Pi or anything else you want to them! I bought a second Motorola Lapdock to use as part of my emergency computer repair tool kit; with this thing and a few cables I've got a mobile keyboard/mouse/monitor I can hook up to any down server or computer with questionable peripherals.
In the case of Raspberry Pi this means that for $49 (Motorola Lapdock) + $35 (Raspberry Pi model 2) + $10 (cost of cables) you have a $94 laptop. Admittedly it's a pretty underwhelming laptop in a field where vastly more powerful laptops can be had for just over $200, but still... If you're buying a Raspberry Pi for anything other than experimenting then you're doing it wrong.
Watch the video above to learn what cables you need and how to modify them; the girl in the video throws me off a bit, I think it's the Ferdinand the Bull nose ring and reddish hair. Also, check out this cool modification to learn how to add a super capacitor to your Raspberry Pi as a great little backup battery/brownout protector (which is particularly useful with the lapdock).
If you used your Windows 8 Upgrade media to install a clean copy of Windows you've probably discovered by now that Windows 8 won't activate, telling you that your key is for upgrade and not clean install. Don't fret; there is a simple solution which does not require you to pointlessly install an old copy of XP, Vista, or Windows 7!
The easy three-step solution is:
- Modify the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\OOBE and set the MediaBootInstall value to 0 (zero).
- Open an elevated command prompt (run cmd.exe as administrator) and execute this command: "slmgr -rearm"
- Reboot, then activate Windows normally.
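If you'd rather script the registry change than click through regedit, a .reg file with the following contents (merged by double-clicking it) does the same thing as the registry step above:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\OOBE]
"MediaBootInstall"=dword:00000000
```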
I'm not sharing this tip as a way to cheat Microsoft out of a dollar, I'm sharing it because anyone experienced enough to be installing a copy of Windows 8 on a clean hard drive has surely owned enough Microsoft computers over the years to legitimately qualify for the upgrade. With Windows XP through Windows 7 qualifying I know in the last 12 years I've owned and still have legal rights to at least 10 - 15 installations (mostly from retired computers).
This hardly needs to be said, as it's been said a million times before, but as it's been my personal experience and frustration for the last few days I can't help but re-iterate the points myself... For all its awesomeness Linux is extremely, profoundly, mind-bogglingly difficult when it comes to installing the things you need. Case in point, over the last few weeks I've needed to install a VPN client on several different real and virtual machines running different flavors of Linux, namely Ubuntu, Debian, Scientific Linux, and CentOS 6. My ultimate success rate was only 50% with me ultimately abandoning the attempt in the other cases after too many hours wasted; I think I spent about 10 hours in all, trying to install VPN on the four systems. This relatively simple task was made incredibly complicated by the process being similar but seriously different for every flavor of Linux involved.
The basic procedure starts simply enough: you need to install the OpenVPN package. But with the various flavors of Linux come various package management systems you need to know, from RPM and Yum to Deb and Apt. And once you know the right command lines, the task is immediately complicated by the fact that OpenVPN depends on several libraries which may or may not be available in the repositories your flavor of Linux connects to automatically. It invariably takes some time working out which repository has the needed libraries, some time wondering about the legitimacy of that repository, some worry that the package isn't entirely suitable for the flavor of Linux you're on, and then the configuration changes needed to actually make Linux look at that repository. With some flavors of Linux this went relatively smoothly, with others not so much. Eventually, in each case, I got the OpenVPN client package and its dependencies installed.
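To illustrate the fragmentation, even the very first step differs per family. These commands need root and, on CentOS 6, an extra repository such as EPEL enabled, so take them as a sketch rather than a recipe:

```shell
# Debian / Ubuntu (Apt):
sudo apt-get install openvpn

# CentOS / Scientific Linux (Yum; openvpn lives in EPEL, not the base repos):
sudo yum install openvpn
```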
Say whatever negative thing you like about Microsoft Windows, but the install experience on Microsoft Windows would have involved at worst picking x86 or x64 versions and possibly selecting between Windows XP / 2000 and Windows Vista / 7 / 8 versions. Everything you needed would be included in the installer.
And here's where it gets even worse with Linux. As I quickly discovered, the ubiquitous Network Manager applet (akin to the wifi/network icon and applet in the Windows system tray) that's featured in all modern Linux task bars, the applet that makes adding, configuring, and connecting to VPN servers quick and easy, still had its Add and Import buttons unhelpfully grayed out. After quite a bit of confusion and much Googling I discovered that for those features to be usable, several additional packages (acting as plugins) specific to Network Manager had to be installed to give it OpenVPN support. This was not something one would naturally expect, as the VPN tab was already present in the Network Manager applet, giving no hint that anything was left to install. And it's here that I was only partially successful across the various flavors of Linux. With two of the flavors I just couldn't find the appropriate dependencies (of the Network Manager plugins) to get the job done; I found things, but they didn't work, were for CentOS 5 when I needed them for CentOS 6, etc.
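The missing piece, it turns out, is a separately packaged NetworkManager plugin. The package names below are the ones I believe are current for each family, but as described above, availability varies by distro and version:

```shell
# Debian / Ubuntu: the OpenVPN plugin plus its GTK configuration dialogs
sudo apt-get install network-manager-openvpn-gnome

# CentOS / Fedora (EPEL needed on CentOS):
sudo yum install NetworkManager-openvpn
```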
And even where I was fully successful on two of the systems the VPN wouldn't connect until a reboot, which I would have been happy doing had the cryptic error I was receiving indicated that might be useful. More Googling required to learn that. And in another case where I came close to getting things working the VPN manager would let me add VPN connections only to then make them unavailable for connection selection, leaving me with no idea why it wasn't working or what to do about it.
Say what you will about Microsoft Windows but there is never a separate installation step required to enable a driver's/software's GUI.
And so it is my profound and lingering frustration that something as miraculously wonderful as Linux continues to be hobbled by a user experience which requires vastly more time, patience, intelligence, and dedication than most users will ever be willing to provide. While I understand that the various flavors of Linux are very much a part of its success and ubiquity in everything from web servers to embedded devices in cars to Android tablets, I can't help but wish the desktop Linux space weren't so fragmented, that putting together a working Linux machine and all its needed packages weren't so g-d damn much like assembling a jigsaw puzzle. When I think of all that Linux does right (all its hardware support, all its ported software, all its UI options), why oh why can't these relatively basic issues be sorted out?
Ah well... I can dream.
I have owned the Viliv S10 Blade, a Windows-based 10" convertible tablet, for a few years and until now it was a device desperately looking for a suitable operating system. Windows 7 came installed on the S10, but its bloat, overhead, and lack of touch friendly interface made the Viliv S10 no more useful than a ruefully overpriced bargain basement netbook. Flash forward several years and the world has come to embrace the tablet, and Microsoft has re-imagined its operating system with a finger-driven touch interface in mind. I was eager to see if Windows 8 could finally make my Viliv S10 what it always should have been. The good news is that the Windows 8 experience on the Viliv is quite a bit better than the Windows 7 experience; the bad news is that the device is still too laggy (CPU too slow and memory too low) and unable to deliver the fluid, effortless experience you've come to expect from even the lowliest Android or Apple tablet. Nonetheless, if you've got a Viliv S10 you'd be a fool not to squeeze a better experience out of the convertible tablet you already own.
I am hoping to save you the pain I experienced trying to get Windows 8 installed on the Viliv S10 Blade, so read on!
Installing Windows 8
The Viliv S10 Blade has no CD/DVD-ROM drive, so you will need to either do a download-based installation of Windows 8 or copy the contents of the Windows 8 DVD (approximately 2.8 GB) onto the Viliv, either from a DVD drive shared from another computer or via a USB memory stick or SD card. When you're ready, begin the install.
The first thing you'll need to decide is what type of install to do: will you keep just your user data, or your user data plus applications and settings? Ideally you would keep your applications and settings, but I tried repeatedly to do an in-place upgrade keeping everything (applications, settings, and user data) and was unsuccessful. Each attempt hung during the "Getting Devices Ready" step at 81% (I left it there for 23 hours on one install). After each failure the installer restores your computer to its pre-install state. I tried uninstalling various software, removing various drivers, and disabling various services within Windows 7 before restarting the Windows 8 install, and nothing made a difference; the installation wouldn't get beyond "Getting Devices Ready". Ultimately I chose the option which kept only my user data, and the install completed successfully. If your install behaves as mine did, you will need to do the same.
Once the installation is done you will discover that you have no Internet connection. Do not attempt to turn on the wifi device with the Fn + F2 key combination. Proceed to the next section.
Calibrate the Screen
You will likely find on install that the touch screen is uselessly mis-calibrated. Fortunately the fix is easy, just use the touch pad to go to the Control Panel and do a search for "calibrate" and then do the touch screen calibration. Your touch screen will now work properly.
Getting Wifi Working

Three things prevent your wifi from working after the Windows 8 install: 1) your wifi module is off (and thus Windows doesn't detect it), 2) no suitable drivers are included with the Windows 8 install files, and 3) the available Windows 7 wifi driver will not work without a "patch".
Step 1: Turn on your wifi module.
Press Fn + F2. You can verify in Windows Device Manager that the device is now on; it will appear as an unknown device.
Step 2: Download Necessary Files
By way of this post I found the trick to getting wifi working. A Viliv S7 owner shared the necessary files and his description of the solution (written in Korean).
Go to his page (on another computer) and download the following files: s7_fix_.zip, Wifi_Driver.zip, and Add_Take_Ownership.reg; do a keyword search on the page and you will find the links to the files. Copy these files to your Viliv via SD card, USB stick, etc.
Step 3: Execute Add_Take_Ownership.reg
Double-click the registry file Add_Take_Ownership.reg to merge it into the registry. It adds a "Take Ownership" item to the context menu when you right-click a file or folder in Explorer, which gives your user access to that file or folder. You will need this shortly.
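I haven't diffed his exact file, but the usual "Take Ownership" context-menu hack that this kind of .reg file implements looks like the following; takeown and icacls are the stock Windows tools doing the real work, so treat this as the generic recipe rather than his precise contents:

```reg
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\*\shell\runas]
@="Take Ownership"
"NoWorkingDirectory"=""

[HKEY_CLASSES_ROOT\*\shell\runas\command]
@="cmd.exe /c takeown /f \"%1\" && icacls \"%1\" /grant administrators:F"

[HKEY_CLASSES_ROOT\Directory\shell\runas]
@="Take Ownership"
"NoWorkingDirectory"=""

[HKEY_CLASSES_ROOT\Directory\shell\runas\command]
@="cmd.exe /c takeown /f \"%1\" /r /d y && icacls \"%1\" /grant administrators:F /t"
```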
Step 4: Install Wifi_Driver.zip
Unpack Wifi_Driver.zip, then go into Device Manager. Right-click the Marvell device and choose "Update Driver Software..."; when the device installation wizard prompts for a location, point it to the unpacked folder.
Step 5: Apply the Patch
Go into Explorer and right click the C:\Windows\System32\Drivers folder. Choose the Take Ownership option from the context menu. With that done, unzip the S7_fix_.zip file you downloaded and copy the contents of it into C:\Windows\System32\Drivers (overwriting the files already in that folder). You may want to make a backup copy of the affected files, just in case you want to restore your machine to its original state.
Step 6: Enjoy Your Wifi!
Your wifi should now work! If it doesn't, try a reboot.
Installing Graphics Driver
The default Windows 8 install uses a generic Windows graphics driver for the Viliv which lacks the graphics acceleration and screen resolution options of the Intel GMA 500 graphics hardware in the Viliv S10. It is a very good idea to install the official driver from Intel: Intel GMA 500 driver 18.104.22.1680 (09/16/2010).
To install, unzip the download to a folder and set the compatibility mode of Setup.exe to "Windows 7" before running it. The install will then proceed normally.
Installing Additional Viliv Software / Drivers
Though none are necessary, you may want to install additional Viliv-specific drivers. In general Windows 7 drivers are compatible with Windows 8, so this official source of Windows 7 Viliv S10 drivers is the place to download them.
I've been running Windows 8 on my Viliv S10 Blade for a couple of weeks now and the experience has been mixed. Part of the blame lies with Windows 8 itself, a curious hybrid operating system that tries to be entirely touch friendly and entirely mouse friendly while being exclusively neither; you are routinely forced to use apps of both flavors to perform tasks, Microsoft having provided its new UI approach for only a small subset of routine OS and administrative tasks. The largest frustration with the Viliv and Windows 8 is the lackluster performance. Most of the new Windows Store apps work quite well, but only if the operating system isn't busy doing something else at the time, and in-app actions like loading resources can make the experience painfully laggy. I suspect that if the Viliv had an additional gigabyte of RAM the experience would be dramatically improved. Still, compared to my absolutely miserable experience of the Viliv with Windows 7, I am at least pleased that my Viliv once again has a purpose in life. I hope you find renewed pleasure in yours as well.