Peace For All

March 13, 2013

C# Feature Need: Lambdas having an implicit reference to themselves

Filed under: C#, Programming — Tags: , , , — Devlin Bentley @ 1:07 pm

I really want a “me” keyword in C# that gives a reference to the current lambda.

For event handling it is often useful to have a one-time event handler that detaches itself after it has fired. Currently this isn’t too hard: you assign the handler to a nice Func&lt;T…&gt; or Action&lt;T…&gt; (or an actual event handler delegate type), and when you reference that variable inside your lambda (while unassigning it) it gets captured in your closure.

An example of this is:

EventHandlerType handler = null;
handler = (string receivedString) =>
{
    this._someString = receivedString;
    EventClass.Event -= handler;
};
EventClass.Event += handler;

As you can see above, handler is my event handler, it takes in a lone string (because event args is for chumps), assigns said string to a class member variable, and then detaches itself.

This isn’t horrible, but it is still a fair pain in the arse. I’d presume the lambda already has a reference to itself somewhere, so my creating an explicit one seems redundant. It also forces an unneeded closure: quite frequently that handler variable is the only thing I am closing over, which means I pay a fairly sizable overhead just to capture a reference to myself!

On a separate note, I wonder if declaring your handlers as class members optimizes this in any way. I am not 100% sure whether they still get captured in a closure; I should read up on it and see if I can find clarification. Thinking about it some more, there may be times when they do need to be captured, but if they are public members this might not be necessary. I am now wondering if the C# compiler is smart enough to optimize this away.

Anyway, none of that would matter if C# had a keyword that said “give me a reference to the bloody function I am in right now!”

And hey, type inference means the syntax could be really nice! 🙂

(And if there is already a way to do this, that doesn’t involve gobs of reflection code, please do tell!)

Now this really becomes a pain when you are trying to chain event handlers together. I have some annoying lock step code I need to write where I do a call-handle-call-handle over a network channel. Each message I get is of the same type (so it flows through the same event), but the handler has to be different each time.

Now obviously I could make one giant lambda that tracks its state and how many times it has responded to messages, but I much prefer simpler lambdas that do exactly one thing. Thus I am constantly unassigning and reassigning handlers to the same event. My code would be a lot cleaner if I didn’t have to predeclare all my handlers.
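For what it is worth, the closest I have come to faking a “me” keyword is a small helper that owns the attach/detach dance so each step’s lambda stays a simple one-shot. This is only a sketch under assumed names (EventSource, MessageReceived, and Once are all mine, not a real API):

```csharp
using System;

class EventSource
{
    public event Action<string> MessageReceived;

    public void Raise(string s)
    {
        var handlers = MessageReceived;
        if (handlers != null) handlers(s);
    }

    // Hypothetical helper: wraps a handler so it detaches itself after firing once.
    // The explicit self-reference lives here, in exactly one place.
    public void Once(Action<string> body)
    {
        Action<string> handler = null;
        handler = s =>
        {
            MessageReceived -= handler;
            body(s);
        };
        MessageReceived += handler;
    }
}
```

With something like this, chaining lock-step handlers becomes a matter of calling Once again from inside body, with no predeclared handler variables cluttering the calling code.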

(Of course this code is dealing with an impedance mismatch between an non-OO system and my OO system, so the code is going to be somewhat ugly, but I prefer to minimize this as much as possible!)

May 2, 2012

Switcher, an awesome alt-tab replacement, with search!

Filed under: Life in general, technology — Tags: , , , — Devlin Bentley @ 12:11 pm

I needed an alt-tab replacement that allowed me to search open windows (yes, I have that many windows open!), and after a few minutes of searching I found the amazing utility Switcher. The animations are a bit slow, but you can turn them off and have a really rapid alt-tab replacement that allows for search! Search is amazing: I have 20 windows open right now, and alt-tabbing through them is generally a pain, but I type at 120 WPM, so searching is faster than using my mouse or having to hit alt-tab, do a visual check of which app is selected, rinse, wash, repeat.

My only complaint is that when using multiple monitors, which monitor search results show up on seems fairly arbitrary. It also seems to split across screens, but it would be nice if there was a way to tell it to stick to one screen or the other.

But those are minor complaints compared to the amount of time and frustration I am saving with it!

April 20, 2012

How Microsoft can take over the High End Gaming Keyboard market

The picture below is of the Microsoft Sidewinder X6, a largely forgotten gaming keyboard from Microsoft.


It was, and still is, close to being the best gaming keyboard ever made. Why?

  • A swappable numeric keypad that can be turned into a macro pad, which makes MMO players happy.
  • Convenient macro keys close to the WASD cluster, so FPS players can have their fun as well.
  • A red backlight that does not ruin your night vision. It also looks less tacky than the blue backlights that are starting to see a backlash against their overuse.
    • The backlight’s brightness can be easily adjusted through the left knob up on top. This makes it really simple to just twist the knob and turn off the backlight before going to bed. No strange key combination to remember.
  • A volume control knob, for lightning-quick changes in volume level; no pounding on the Vol- key while your ears are getting blasted.
  • A full set of media playback keys, meaning there are no strange hotkeys or function+key combinations to remember.

Now, that said, this keyboard is not perfect. It does not have N-Key Rollover, which is very, very unfortunate. The keyboard that came after it, the Sidewinder X4, has amazing NKRO and red backlighting, but is otherwise a very utilitarian keyboard. This fits its role as a low-cost gaming keyboard, but it entered a very crowded market and it didn’t really take the world by storm.

The other problem with the X6 is that the high end gaming keyboard market has moved on. The current big thing is Cherry MX switches of various types. Right now only a few manufacturers are making gaming keyboards with Cherry MX switches, and with the exception of Corsair’s Vengeance series, all the Cherry MX gaming keyboards are fairly spartan in their feature offerings. Many of them do not even have media control keys, and the vast majority have the same styling as regular cheap PC pack-in keyboards.

I believe that when you take all these factors into consideration (Microsoft’s excellent design work on the X6 and the lack of real competitors in this product space), Microsoft is in a great position to enter and dominate the market for high end gaming keyboards.

How? Quite simple: release an updated version of the Sidewinder X6 with NKRO that uses Cherry MX switches. Offer it in two SKUs, one with Brown switches and one with Red. (The Cherry MX Brown SKU could even have a limited production run, but it would serve the purpose of getting excellent press amongst enthusiasts.)

This would immediately place Microsoft’s offering at the top of the pack for Cherry MX gaming keyboards by offering more features than any other gaming keyboard of comparable quality. The X6’s design was already great, and re-released and updated it has the potential to be the best gaming keyboard sold by anyone.

The second aspect of this is doing a proper marketing campaign. Thankfully there are so few Cherry MX gaming keyboards out on the market right now that getting reviewers to take a look at your product is comparatively easy, as is building up a good grassroots base on forums. If MS sets out full throttle on both paths, top down and ground up, a new Cherry MX X6 should be well received by a community that eagerly awaits the latest high quality products.

October 18, 2011

Beware of the Current Directory when using PowerShell Remoting!

Filed under: PowerShell, Programming, technology — Tags: , , , , — Devlin Bentley @ 2:04 pm

Are your files appearing in strange places? Or maybe not appearing at all? Does everything work when run locally, but when remoting all of a sudden things work a bit differently?

Be aware that when using PowerShell remoting that your working directory may not be what you expect!

Create a simple test script that writes out the value of Get-Location to a log file at an absolute path. Run this script remotely to figure out what your actual default location is!

At the top of your scripts it may be a good idea to use Set-Location to make sure your current working directory is what you think it is. This is especially true if you try to access files relative to your script’s location. (This is good advice anyway!)
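A minimal sketch of both suggestions (the log path here is arbitrary; $PSScriptRoot requires PowerShell 3.0, on 2.0 use Split-Path $MyInvocation.MyCommand.Path):

```powershell
# Log where the remote session actually starts out, using an absolute path
Get-Location | Out-File -FilePath 'C:\Logs\remoting-cwd.log'

# Pin the working directory to the script's own folder before touching relative paths
Set-Location -Path $PSScriptRoot
```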

Also note that PowerShell tracks its current working directory differently than Windows does; a really good explanation of this difference exists elsewhere.


September 25, 2011

PowerShell Call Operator (&): Using an array of parameters to solve all your quoting problems

Filed under: Life in general, PowerShell, Programming — Tags: , — Devlin Bentley @ 7:30 am

I would like to thank James Brundage (blog!) for telling me about this. Suffice to say, the man is seriously into automation.

Alright, if you just want to learn about using arrays of parameters with the call operator (&) and skip all the explanation of what doesn’t work, scroll down to the bottom. I am a big believer in understanding solutions though, so this post will detail everything that doesn’t work and slowly build up towards what does work.

The last blog post I did on this topic was about using Invoke-Expression to solve problems with passing parameters to external programs. I resorted to using Invoke-Expression since (as an undocumented side effect?) Invoke-Expression will strip off quotes from parameters to commands it executes. But in some circles using Invoke-Expression to execute programs is considered heresy. It is thanks to James Brundage that I was able to figure out how to better use & and also come to a greater conscious realization of how PowerShell handles strings.

To summarize the problem, try to get the following to run in PowerShell

$pingopts = " -n 5"
ping $pingopts

If you run this command, ping will spit out an error. The root cause of the problem is that PowerShell passes $pingopts to ping with the quotes still on it, so the above line is the same as typing

ping “ -n 5”

Which is obviously quite wrong.

The next obvious solution is to use the call operator, “&”. The call operator is how you tell PowerShell to basically act as if you had just typed whatever follows into the command line. It is like a little slice of ‘>’ in your script.

Now the call operator takes the first parameter passed to it and uses Get-Command to try to find out what needs to be done. Without going into details about Get-Command, this means the first parameter to the call operator must be only the command that is to be run, not including parameters. The people over at explain it really well.
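To make the Get-Command behavior concrete, the first token after &amp; must resolve to a single command by itself (a quick sketch, using localhost as a stand-in target):

```powershell
$cmd = "ping"       # the command name alone, no parameters
& $cmd localhost    # works: & resolves "ping" via Get-Command; localhost is an argument

# & "ping localhost" # fails: & looks for a single command literally named "ping localhost"
```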

With all this in mind, let us try the following

$pingopts = " -n 5"
&ping $pingopts

Run that and you will get the exact same error. Fun!

Why is this happening?

The problem is that & does not dequote strings that have spaces in them.

So this code works:

$pingopts = ""
&ping $pingopts

Where as

$pingopts = ""
&ping $pingopts

will not.

But if we think about this for a minute, we already know about this behavior. Heck, we expect it and rely on it. It is so ingrained into how we use PowerShell that we don’t even think about it, except when we run head first into it. So now let us explicitly discuss PowerShell’s handling of strings.

String Quoting Logic

The string auto-quoting and dequoting logic is designed around passing paths around. The rule, as demonstrated above, is quite simple: a string with a space in it gets quoted when passed to something outside of PoSH, while a string without spaces in it has its quotes stripped away. This logic basically assumes that if you have a space, you are dealing with a path and you need quotes. If you don’t have a space, you are either dealing with a path that doesn’t need quotes, or are passing something around that isn’t a path and you do not want quotes. For those scenarios PowerShell gives exactly the results people want, which just so happen to be the results people need 95% of the time.

Problems arise when you have strings with spaces in them that you do not want quoted after leaving the confines of PowerShell. Bypassing the string quoting/dequoting logic is not easy: you can end up resorting to Invoke-Expression hacks like I detailed earlier, or you can try to find a way to work within the system. The latter is obviously preferable.

The Solution

You may have already guessed the solution from the title of this blog post: pass an array of parameters to the call operator. Given the sparse documentation available online for &amp; (it would be nice if it said string[] somewhere), one has to have a fairly good understanding of PowerShell to figure this out on their own, or just randomly try passing an array to &amp;.

The key here is working the system: by passing parameters in an array you can avoid having spaces in your quoted strings. Where you would normally put a space, you break off and create a separate array element. This is still a bit of a workaround; it would be optimal to find a way to tell &amp; to dequote strings, but this solution does work.


$pingopts = @("", "-n", 5)
&ping $pingopts

Again, notice that instead of “-n 5”, I split it into two array elements.

Just for reference, here is how you would build that command up line by line using an array:

$pingopts = @()
$pingopts += ""
$pingopts += "-n"
$pingopts += 5
&ping $pingopts

This actually is not much different from constructing 3 separate variables and passing them in after ping:

$param1 = ""
$param2 = "-n"
$param3 = 5
&ping $param1 $param2 $param3

Which is the blatantly obvious solution, but also the ugly one, so I never even considered it. Of course, using arrays is more flexible since you can declare the array at the top and slowly build up your command line throughout your script.
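As a side note, splatting offers a closely related trick: prefixing the array variable with @ instead of $ also hands each element to the program as a separate argument. A sketch, again with localhost standing in for the real target:

```powershell
$pingopts = @("localhost", "-n", 5)
ping @pingopts    # splatting: each array element becomes its own argument on ping's command line
```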

Hopefully this saves everyone some time, and the journey has helped you understand a bit more about PowerShell.

September 22, 2011

Remotely Executing Commands in PowerShell using C#

Filed under: C#, PowerShell, Programming — Tags: , , — Devlin Bentley @ 4:02 pm

At first glance, this seems like it should be easy. After all: remoting using PowerShell is dirt easy! Once the server is configured you are a single Invoke-Command away from remoting bliss! So how hard could it be in C#? Surely there is some functional equivalent to Invoke-Command that gets exposed, right?

Well no, it is not that simple. It isn’t too bad, but the lack of any examples on MSDN makes some aspects of doing this quite literally impossible to figure out if you are using MSDN documentation alone.

Thankfully by piecing together posts from the MSDN Forums it is possible to get something working. Having done just that, I figure I’d save you all the time and effort.


First, get a reference to System.Management.Automation into your project. The best way to do this is to manually edit the csproj file and add the line

<Reference Include="System.Management.Automation" />

Yes this is ghetto. I am not sure why that library is not sitting with the rest of them in the proper location that would get it listed in the “Add References” dialog.


The goal: from C# code, execute PowerShell code on a remote machine.


The obstacle: a bunch of poorly documented object constructors.

The key to getting this all working is to properly construct an instance of WSManConnectionInfo and pass that on to the RunspaceFactory.CreateRunspace(..) method.

Step 1: Constructing a PSCredential

This step is pretty straightforward. PSCredential has only one constructor; it takes in a username and a SecureString password. The only gotcha is that userName includes the domain, if applicable.

Good luck on the SecureString PW part. Doing it properly (i.e. never storing your PW in a string at any step) can take some planning ahead, depending on your situation of course.

PSCredential remoteMachineCredentials = new PSCredential(domainAndUserName, securePassword);
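For reference, one way to build the SecureString without the password ever living in a managed string is to feed it one character at a time, e.g. from console keystrokes. A sketch (PasswordReader and ReadPassword are my names, not framework APIs; backspace handling omitted):

```csharp
using System;
using System.Security;

static class PasswordReader
{
    public static SecureString ReadPassword()
    {
        var password = new SecureString();
        ConsoleKeyInfo key;
        // Append each keystroke directly; the full password never exists as a string
        while ((key = Console.ReadKey(true)).Key != ConsoleKey.Enter)
        {
            password.AppendChar(key.KeyChar);
        }
        password.MakeReadOnly();
        return password;
    }
}
```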

Step 2: Constructing WSManConnectionInfo

This is the hard part, mostly because the documentation on how to use this class is so poor, most of it consisting of how to establish a loopback connection to localhost.

With our particular goal in mind, we only really care about one of the overloads:

public WSManConnectionInfo (
    bool useSsl,
    string computerName,
    int port,
    string appName,
    string shellUri,
    PSCredential credential
)

The documentation for this constructor (located at if you want to read it yourself) has such gems as


The application end point to connect to.

Not very useful.

Let’s go over these parameters one by one and look at what each one means.

useSsl – Pretty simple: a bool flag indicating whether SSL should be used. Highly recommended, of course. Note that changing this changes which port number you will be using later on.

computerName – The name of the machine you are connecting to. On local networks, this is just the machine name.

port – Thanks to the blog post at we know what port numbers PowerShell remoting uses: 5985 if you are not using SSL, and 5986 if you are using SSL.

appName – This should be “/wsman”. I don’t know what it is or what other values you can use here, but thanks to Emeka over on the MSDN forums we know that “/wsman” will get it working.

shellUri – Again, thanks to Emeka we know what value this needs to be. I am not sure what other values are acceptable, but that value does indeed work.

credential – Finally we come back to the PSCredential object we constructed earlier. Simple enough.

Step 3: Creating a Runspace

We thankfully return to documented territory. After constructing the WSManConnectionInfo, just pass it into RunspaceFactory.CreateRunspace(…) and be done with it.

Step 4: Putting it all together

string shellUri = "";
PSCredential remoteCredential = new PSCredential(domainAndUsername, securePW);
WSManConnectionInfo connectionInfo = new WSManConnectionInfo(false, remoteMachineName, 5985, "/wsman", shellUri, remoteCredential);

using ( Runspace runspace = RunspaceFactory.CreateRunspace(connectionInfo) )
{
    // The runspace must be opened before any pipelines can run
    runspace.Open();

    Pipeline pipeline = runspace.CreatePipeline("<COMMAND TO EXECUTE REMOTELY>");

    var results = pipeline.Invoke();
}


All in all, not too bad. The sad part is, in terms of figuring out what to do, it is easier to create a local runspace and use Invoke-Command on it; but once you know what you are doing, it is only one extra line of code to execute scripts remotely!

September 21, 2011

PowerShell Tip: Building a command line for execution

Filed under: PowerShell, Programming — Tags: , , , — Devlin Bentley @ 4:59 pm

Update: This still works best for some scenarios, but learn about how to use the call operator to do this and work within the system!

So you want to build up a command line to execute some utility. Simple enough, right? Let’s use ping as an example. Say you want to ping 5 times. Your parameters would look like -n 5. Append this to the end of ping in PowerShell and off you go, right?

$pingopts = " -n 5"
ping $pingopts

Run that in PowerShell and you will hit a small problem: ping will give you an error saying it cannot find the host “ -n 5”.

If you examine the command line that is used to execute ping (pick your favorite tool, I chose process monitor!) what is happening becomes quite clear. The command that was executed was this:

"C:\Windows\system32\PING.EXE" " -n 5"

The problem is that the string quotes around $pingopts were kept in place. While keeping quotes is useful when passing paths around, it is not what you want in most other circumstances. We need to make those quotes go away, and thankfully you can use Invoke-Expression to do just that.

$pingopts = " -n 5"
invoke-expression "ping $pingopts"

This code will work perfectly! If you know of any other solutions please post about them, I am sure there are many ways to solve this problem in PowerShell!

In summary: generally when working with PowerShell cmdlets you don’t have to worry about strings; PowerShell and its cmdlets handle quoting and dequoting them perfectly. But you have to be careful when interfacing with the non-PowerShell world.

September 9, 2011

How To Properly Get The Drive Letter of a Mounted VHD in PowerShell

Filed under: PowerShell — Tags: , , , , , — Devlin Bentley @ 1:06 pm

There are multiple ways to get the drive letter of a mounted VHD in PowerShell. The most common way is to enumerate all drive letters, mount a VHD, enumerate all drive letters again, and find out which new drive letters have appeared. This strategy is NOT safe and will break down if you have more than one program or instance of your script trying to use this method at the same time.

The reason why it isn’t safe is quite obvious. Assume you have a machine with just one active drive at start, “C”.

  1. Script 1 enumerates all drives, gets back a list {“C”}
  2. Script 1 mounts VHD1, VHD1 is assigned drive letter “D”.
  3. Script 2 enumerates all drives, gets back a list {“C”, “D”}
  4. Script 2 mounts VHD2, VHD2 is assigned drive letter “E”.
  5. Script 1 enumerates all drives, gets back a list {“C”, “D”, “E”}

At this point Script 1 is not sure which drive belongs to the VHD it mounted. If you have a UNIQUE volume name, great! You can select based on volume name and you are in luck.

If you don’t though, you have run into the limitations of this technique.

But there is a better way!

Credit goes out to the PowerShell Management Library for HyperV. They do it properly!

First thing to know is that VHDs are mounted as virtual SCSI Disks. A virtual SCSI Disk can be uniquely identified by a combination of LUN, SCSI Target ID and SCSI Port. Our basic strategy is going to be mapping from Mounted VHD path to a Virtual SCSI Disk and then digging into that Disk object to find out what drive letter it has.

So let’s break out some WMI, shall we?

# Given the full path to an already mounted VHD and the name of a volume on it,
# returns the drive letter that VHD was mounted to
function GetDriveLetterOfMountedVHD($FullPathToVHD, $VolumeName)
{
    $MountedDiskImage = Get-WmiObject -Namespace root\virtualization -Query "SELECT * FROM MSVM_MountedStorageImage WHERE Name ='$($FullPathToVHD.Replace("\", "\\"))'"
    $Disk = Get-WmiObject -Query ("SELECT * FROM Win32_DiskDrive " +
        "WHERE Model='Msft Virtual Disk SCSI Disk Device' AND ScsiTargetID=$($MountedDiskImage.TargetId) " +
        "AND ScsiLogicalUnit=$($MountedDiskImage.Lun) AND ScsiPort=$($MountedDiskImage.PortNumber)")
    $Partitions = $Disk.GetRelated("Win32_DiskPartition")
    $LogicalDisks = $Partitions | ForEach-Object { $_.GetRelated("Win32_LogicalDisk") }
    $DriveLetter = ($LogicalDisks | Where-Object { $_.VolumeName -eq $VolumeName }).DeviceID
    return $DriveLetter
}

The key thing to notice here is that you are asking WMI for a MountedStorageImage based on the full path of the VHD you mounted. This guarantees that you are not conflicting with any other script’s VHD activities. All the info returned to you is only about the VHD you mounted.

The rest of the function is pretty straightforward. It can actually all be done in one line, but I expanded it out here for clarity.

  1. Using the knowledge you have about the MountedDiskImage’s assigned virtual SCSI info, get a Win32 disk drive by searching on the matching LUN, Target ID, and SCSI Port
  2. Get a list of partitions on that disk.
  3. Get a list of logical disks (the things you see in My Computer) associated with each partition
  4. Return the drive letter of the volume that you want.

Now if your logical disks happen to have identical volume labels, you can index into $LogicalDisks and pick out the one you want that way, and so long as you don’t go rearranging partitions in your VHD that may work just fine. In addition, you can replace the last where {$_.VolumeName -eq …} bit with something unique to your situation (Size, FileSystem, etc.).

One final note: due to the use of the virtualization WMI namespace, this code will only work on Windows Servers that have the Hyper-V role installed. With the announcement that Windows 8 is getting Hyper-V, I am hopeful that the WMI virtualization namespace will become available to client OSes as well!

September 21, 2010

The commoditization of the Smartphone Market

Filed under: technology — Devlin Bentley @ 10:43 am

(Note:  I am no longer an employee of Microsoft nor am I in any way at presently related to either Windows Phone or Windows Mobile.  All opinions expressed here are mine and mine alone.)

A Nokia executive recently said that phone manufacturers using Android is “like peeing in your pants to stay warm.” The implication being that over the long term the manufacturers are just hurting themselves.

He is correct.

In the long-term, using Android or Windows Phone is a doomed strategy for the phone manufacturers.  The executives of these companies (HTC, Motorola) know this, but they really have no other choice.

For many years now Microsoft has been trying to repeat their successful desktop business strategy in the smartphone market.  As a quick refresher, their desktop strategy was quite simple: 

  1. Make a highly usable, desirable OS and license it to computer OEMs to sell with their machines. This means all computers sold by different manufacturers will be, to a large extent, identical underneath the label. Minor HW differences aside, consumers will desire a “Windows” computer.
  2. Competition between manufacturers will drive down price, thereby increasing the number of people who can afford to buy a computer, encouraging more manufacturers to enter the market, further driving down prices. Rinse, wash, repeat; after about two decades an entry-level computer now costs $200.

If you are an OEM though, there is a nasty side effect: The profit margin on a new computer is almost $0.  On lower priced machines those annoying sponsored apps that come pre-installed are often the only way companies like Dell and HP make a profit at all.

Initially things weren’t so bad. Companies such as Dell and HP had a healthy profit margin. It was when the first round of drastic price cuts hit, around 1999, that you saw companies like Gateway[1] pretty much cannibalize themselves to stay alive.

Microsoft has been trying to apply this same strategy to the smartphone market for nearly a decade now. When the smartphone OEMs looked at what happened to Dell and Gateway, and then looked at Microsoft’s offer to take Windows Mobile to the masses, they basically said “no thanks”. OEMs put the minimal effort needed into selling Windows Mobile to business customers, and they never dedicated the resources to it that Microsoft was hoping they would. After Palm’s initial dropping out of the smartphone market, OEMs stopped competing at all. Hardware, prices, screen size, resolution, all stayed the same for years. This pleased the OEMs: smartphones stayed a niche, high-margin market, bought by businesses and tech enthusiasts. Eventually an excuse for the static growth of the smartphone market appeared: “consumers do not want to buy smartphones”. Eventually both the OEMs and many within Microsoft started to believe this excuse.

Apple changed all of that.  Apple demonstrated that you can sell Smartphones to everyday consumers, and pretty much mint money in the process. On some of the earlier iPhones Apple was making over $200 profit per phone sold.  That type of profit margin on technology is just insane.

What happened next was predictable. Everybody started ramping up smartphone manufacturing.  Unlike Apple, they are using commodity OSes, either Android or Windows Phone.

And now they are all doomed.

Right now smartphone OEMs are competing on features. Resolution, processor speed, memory, storage, camera resolution, and screen quality are all increasing while prices are staying the same. Phone OEMs are desperate for new features to justify keeping prices up; just look at all the buzz about dual-core ARM processors that are “coming soon”. But no matter how hard the OEMs struggle, prices will begin to fall as features level off.

And the history of the PC market will finally repeat itself.

Microsoft has been working towards this for years.  You have a bunch of really smart people who see the potential of technology, and they have been trying to get that technology out to the masses for over a decade now.  Apple demonstrated that the market is there, Google came along with a shinier OS that is popular right now, but no matter which company ends up winning the mobile handset OS race the overall lesson is that history is going to repeat itself. Prices are going to drop, and OEMs are going to have smaller and smaller margins on handsets.

The phone OEMs know this. They don’t want it to happen. The smarter executives see the writing on the wall and are making money off it while they can; the foolish ones may actually believe the current state of affairs (having a profit margin to speak of) is going to last. It won’t.

As for Apple?

After a brief flare of massive profitability they will once again fade to being a niche player. History repeats itself. They cannot maintain their insane profit margins for much longer; once features level off, Android and Windows Phones are going to drive down the average cost of smartphones. Apple will either have to lower their prices to compete or once again become relegated to being a high-class luxury brand.

[1] In the mid 1990s Gateway had a reputation for having the best customer service, something I can personally attest to.  My $2000 Gateway computer came with in-house tech support, when something broke a tech guy came and swapped the broken part out. When the price cuts hit Gateway pretty much destroyed their customer service reputation, it has taken them many years to rebuild it.

July 11, 2009

My New PC

Filed under: Hardware Review, technology — Tags: — Devlin Bentley @ 3:17 am

Waiting for Brown

On days like today I think UPS is worse than Santa for one single reason:  Santa does not taunt you with a tracking number.

Yes, Friday was a day spent waiting for UPS to come and deliver goodies in boxes, boxes containing many parts, and with much assembly required.

Though personally I happen to think that the end result looks rather good

Front of Case Power Off

Devlin's PC Build 2009 042

But getting there was not as easy as it should have been.

Of course, where are my manners, first the technical specs for those of you who care.

Geek Specs

CPU: AMD Phenom II X4 905e 2.50GHZ (energy efficient 65 watt version of the quad core 2.5GHZ Phenom II)
Motherboard: Asus M4A78-EM MicroATX
Video Card: HIS ATI HD4850
RAM:  8GB DDR2 PC1033
Primary HD: OCZ Vertex 64GB SSD drive.
Secondary HD: 250GB drive from previous machine
Case: NZXT Rogue MicroATX Silver (Brush Aluminum)
Power Supply: Some crazy Antec unit I probably paid way too much for
OS: Windows 7 RC

Naturally everything except the power supply and OS was purchased on

Until about half an hour after first boot I thought the SSD drive was a waste of money.  Since then I have had continuous issues involving picking my jaw up off of the floor.

NZXT Rogue Review

This is, no doubt about it, a very lovely case.  On sale for $80, it was a very affordable, lovely case.  Of course, three days after I bought one its sale price dropped by $10 down to $70, but such is life in the world of technology.  My CPU is also $5 cheaper now.  🙂

In spite of its good looks, this case, as many of the Newegg reviews hint at, has some issues.

Getting the Case Apart

I have never had to remove so many screws to get a case apart.  It must have been around 10 screws total to get to the hard drive cage.  This also counts removing the motherboard tray, since builds with this case must be done in a very particular order or else you will find yourself stuck.

In the past I have built many computers where the hard drive/CD-ROM drive mounting cage elegantly popped out of the case; I slid my drives in, threw on some screws, popped the cage back into the case, and went on my way.  This case is nothing like that.

Granted, few MicroATX cases allow removal of the drive cage once other components are in place, but even so a removable drive cage would have shaved at least an hour, if not more, off of the initial installation of components into this case.

The 1 Hour DVD-ROM Drive Install

Let me put a disclaimer in here and say that I have built many, many PCs.  I have done quite a few years of lab work at various times in my life, in addition to building PCs at home.  Going at a good clip, once I get into the rhythm of things, I can install a dozen-plus DVD-ROM drives in an hour.

This case aims to adjust one's expectations in a rather downward direction.

My DVD-RW drive just so happened to be a fraction of a millimeter too wide to fit when using the included drive rails.  Abandoning the rails, I tried to screw the drive in, but the screws that had secured the drive in its previous case would not reach: the layer of sound dampening material lining all the drive enclosures makes them a little wider than normal, so standard mounting screws are too short.

If you add together how long it took me to unscrew my way to the drive cage and how much pushing and shoving I had to do to get the drive into place, it was at least an hour to get a single DVD drive installed.  It was immediately after declaring success that I noticed NZXT's included bag of "CD-ROM screws" happened to be a bit longer than typical 5.25" drive mounting screws, such as the ones I had failed to use.

I have a box of 5.25" drive mounting screws (as a subset of my larger "computer screw box").  They are all the same thread pitch and the same length.  The screws used in this case are not.  I can't fault NZXT for putting sound dampening material in their case (quiet cases are good cases), but unusual screws are, well, unusual.

Tin Foil Motherboard Tray

If the metal used for the motherboard tray were any thinner, it would qualify as tin foil.  I do not think I have ever seen a piece of any computer case that was so easily deformed, and I have repaired a lot of cut-rate computers.  The thinness of the metal was quite surprising given that

This Thing Weighs a Ton

Ok, not a ton, but the case does weigh around 20 pounds.  Much to the amusement of reviewers, NZXT advertises it as a portable LAN party case.  Thankfully I did not buy it with any intention of taking it to LAN parties, but the included carrying strap (carrying harness?) is all the more hilarious for thoughts of what would happen to anyone's back if they pretended that this case was portable in any sense of the word except for "not nailed down".

To make it clear, I am not criticizing the case for its weight.  I wanted a case made of brushed aluminum and I got it, and I am much more pleased with it than I am with the "brushed aluminum door cover" style case that I purchased previously.

However, I do find it amusing that my MicroATX-based computer, weighing in at 31 lbs total, feels like it is just a few pounds shy of my full tower case, which has yet to be formally weighed.

Seriously, this case is unexpectedly heavy.  I am still not quite accustomed to the idea of such a small computer weighing so much.

More Complaints about Assembly

This would be a really good case if assembly weren't so evil.  Granted, not that many cubes have a total of five 3.5" bays (four internal, one external facing), but actually installing anything into the drive cage is a very convoluted process.  A good example of this is how installing any 5.25" device first requires removing any installed 3.5" devices, as can be clearly seen in one of Newegg's promotional shots:

NZXT Cube Newegg Drive Bays

Notice the 3.5" drives are mounted vertically next to the 5.25" bays.

Thankfully I read the manual first, which clearly warns users to install any 5.25" drives first.

Speaking of the manual, the pictures are so poorly taken or poorly printed that they are universally too dark; it is impossible to tell which part of the case is being shown in any of the (far too few) photos.

Lots of Bags

Many bags of screws, all very well labeled.  Awesome job on this.

What’s Up With The Fans?

None of the chassis fans use smart fan plugs; instead they all opt for Molex.  I can understand the fans with LEDs perhaps needing more power, but the side fans could stand to have proper speed-controlled, monitored plugs on them.  Although I must admit one very awesome thing about the fans is that

The Fans Have Filters

Fan Filter on Side Case Panel (repeat for other side panel)

Except for, you know, the huge fan in back, which just has a standard useless grille.

Rear Case Fan

If the 3 included case fans are not enough, it is possible to install 2 more fans, 1 on each side of the case.

Well it would be, almost.

Where Are The Rest Of My Washers?

Case Side Fan Washers

NZXT installs filters on the two spare fan slots, but only includes enough washers for one more fan.  This confuses me a bit.

Other Build Issues

My front USB ports are busted

Broken Front USB

The top port does not work at all; Windows does not recognize when something is plugged into it, while anything plugged into the bottom port results in the error "A USB Device Has Malfunctioned…"

Yes, I'm a little pissed about this.  Come on folks.  20 lbs of brushed aluminum and you skimp out on the build quality of the front USB ports?  Even the internal header is flimsy!  (Admittedly, in my experience internal USB headers on cases are always rather flimsy.)

The plastic top window also looks like a layer of it is peeling away

Top Window Peeling

I apologize for the quality of the picture.  Apparently to my digital camera the case's top window is just slightly more reflective than a mirror.

It actually looks a fair bit worse in person.  At first I thought it was another protective layer of plastic wrap, but it is something internal to the plastic window itself.

Neither of these issues is going to get me to RMA the case, mostly because of how hard it is to put together.

Other Thoughts

I have no internal cable running skills

Internal Cable Nest

This is why I did not want a case with a side window.  :)  After 20 more minutes of attempted cable organizing, the situation had not improved by much.

The motherboard has PCI slots.  I have no clue why.  I would much prefer another PCIe 16x slot so I could put another 4850 in here next year when they'll be going for $50 apiece.

SSD drives are fast.  I read about it being like "night and day" but of course I didn't take it seriously.  I do now.

Overall Performance

Apps install so quickly it is insane.  The system boots so quickly it is crazy.  The BIOS has a "fast on" feature where it can boot to a minimal OS that ASUS provides.  This "fast" boot time is 10 seconds.

Why bother when I can wait 15 seconds and be into Windows?
