Peace For All

March 13, 2013

C# Feature Need: Lambdas having an implicit reference to themselves

Filed under: C#, Programming — Devlin Bentley @ 1:07 pm

I really want a “me” keyword in C# that gives a reference to the current lambda.

For event handling it is often useful to have a one-time handler that detaches itself after it has fired. Currently this isn’t too hard: you assign the handler to a Func<T…> or Action<T…> (or an actual event handler delegate type) variable, and when you reference that variable in your lambda (while unassigning it from the event) it gets captured in your closure.

An example of this is:

EventHandlerType handler = null;
handler = (string receivedString) =>
    {
        this._someString = receivedString;
        EventClass.Event -= handler;
    };
EventClass.Event += handler;

As you can see above, handler is my event handler: it takes in a lone string (because event args is for chumps), assigns said string to a class member variable, and then detaches itself.

This isn’t horrible, but it is still a fair pain in the arse. I’d presume the lambda already has a reference to itself somewhere, so my creating an explicit one seems redundant. It also forces an unneeded closure: quite frequently that handler variable is the only thing I am closing over, which means I have a fairly sizable overhead just to capture a reference to myself!

On a separate note, I wonder if declaring your handlers as class members optimizes this in any way. I am not 100% sure whether they still get captured; I should read up on it to see if I can find clarification. Thinking about it some more, there may be times when they do need to be captured, but if they are public members this might not be needed. I am now wondering if the C# compiler is smart enough to optimize this away.

Anyway, none of that would matter if C# had a keyword that said “give me a reference to the bloody function I am in right now!”

And hey, type inference means the syntax could be really nice! 🙂

(And if there is already a way to do this, that doesn’t involve gobs of reflection code, please do tell!)

Now this really becomes a pain when you are trying to chain event handlers together. I have some annoying lock-step code I need to write where I do a call-handle-call-handle over a network channel. Each message I get is of the same type (so it flows through the same event), but the handler has to be different each time.

Now obviously I could make one giant lambda that tracks its state and how many times it has responded to messages, but I much prefer simpler lambdas that do exactly one thing. Thus I am constantly unassigning and reassigning handlers to the same event. My code would be a lot cleaner if I didn’t have to predeclare all my handlers.
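Until something like that exists, one way to cut down on the boilerplate is to wrap the detach-on-first-fire dance in a small helper. Below is just a sketch: it assumes the event’s delegate type is compatible with Action<string>, and the names (EventHelpers, SubscribeOnce) are mine, not from any library.

static class EventHelpers
{
    // Sketch: a one-shot subscription helper. The caller supplies delegates
    // that perform the += and -= so the helper only needs to know how to
    // attach and detach a handler, not the event itself.
    public static void SubscribeOnce(
        Action<Action<string>> subscribe,
        Action<Action<string>> unsubscribe,
        Action<string> body)
    {
        Action<string> handler = null;
        handler = receivedString =>
        {
            unsubscribe(handler);    // detach before doing the real work
            body(receivedString);
        };
        subscribe(handler);
    }
}

// Hypothetical usage against the EventClass example above:
// EventHelpers.SubscribeOnce(
//     h => EventClass.Event += h,
//     h => EventClass.Event -= h,
//     receivedString => this._someString = receivedString);

It is still the same closure trick under the hood, but at least the null-then-assign dance lives in one place.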

(Of course this code is dealing with an impedance mismatch between a non-OO system and my OO system, so the code is going to be somewhat ugly, but I prefer to minimize this as much as possible!)

July 3, 2012

Why Would Anyone Give Up Dreams?

Filed under: Life in general — Devlin Bentley @ 11:06 am

The music in my dreams, the sights, and the visions. The artistry and the wonder, the potential untapped. Never would I surrender eight hours of dreaming the future, discovering my past.

May 2, 2012

Switcher, an awesome alt-tab replacement, with search!

Filed under: Life in general, technology — Devlin Bentley @ 12:11 pm

I needed an alt-tab replacement that allowed me to search open windows (yes, I have that many windows open!), and after a few minutes of searching I found the amazing utility Switcher. The animations are a bit slow, but you can turn them off and have a really rapid alt-tab replacement utility that allows for search! Search is amazing: I have 20 windows open right now, and alt-tabbing through them is generally a pain, but I type at 120 WPM, so searching is faster than using my mouse or having to hit alt-tab, do a visual check of which app is selected, rinse, wash, repeat.

My only complaint is that when using multiple monitors, which monitor search results show up on seems fairly arbitrary. It also seems to split across screens, but it would be nice if there was a way to tell it to stick to one screen or the other.

But those are minor complaints compared to the amount of time and frustration I am saving with it!

April 26, 2012

My favorite meal to cook

Filed under: Life in general — Devlin Bentley @ 11:09 am

I get asked this question a fair bit, so I decided to make a blog post about it. For those of you who don’t know, one of my passions is cooking healthy food at home, from traditional American favorites to dishes from around the world.


On multiple occasions the question “What’s the best meal you have ever made?” has been asked of me. That, it turns out, is a question that requires a fair bit of detail to answer.

Let me start off with yesterday: my dinner consisted of a homemade soup, tasty if simple and utilitarian. I had prepared it the night prior with the intent of eating it after returning from work, which is exactly what I did.

But that soup is of no great consequence; the effort put in was minimal and the result sufficient. It serves only as an example of how I prefer to plan my meals during my work week.

Twice Baked Potatoes

Now, if one asked my friends and family which of my dishes was their favorite, I can promise that the answer would be my Twice Baked Potatoes. All twice baked potatoes start from the same base: potato innards, generous amounts of sour cream, an equally good measure of grated sharp cheddar cheese, a bit of butter, chopped green onions, and mayhaps a bit of crushed garlic. To this I add my one custom ingredient, the ingredient that shocks and amazes: at least two large peeled shrimp, which are placed into each potato shell after it has been stuffed.

Preparation instructions are the same as for any other twice baked potato, merely bake the potato to near completion, cut in half, scoop out the insides, prepare the stuffing as described above, insert stuffing back into the shells, and bake again for a short bit of time.

Simple in theory, a bit of legwork in practice, but the end result is wonderful. No one expects, but everyone has been pleased by, the shrimp.

72 Hour Smoked Bacon Stew

Now I will not argue that my twice baked potatoes are not one of my best dishes; surely they are. But they are not the dish into whose creation I put the most love.

What I pour my heart into is the making of my 72 Hour Smoked Bacon Stew.

The first step in its creation is to acquire bone-in smoked bacon from a European deli, of which I am thankful to live near a number. The bacon is then placed into a large pot which has been filled with tomato sauce (if I am truly feeling up to it, I make the tomato sauce myself from purchased tomatoes). Some simple seasonings are added to the pot: a few bay leaves, and a wonderful chili powder variant that one can only acquire online.

This is then allowed to stew for a wee bit less than three days, being tended to and watched carefully so that the broth does not boil away or bubble over.

On the first day, the house is filled with a wonderful smokey aroma. It is a pleasant harbinger of things to come.

On the second day, both the meat and the fat have fallen from the bone, and it can be seen where the bone marrow itself has started to fall apart.

It is on the third day that one awakens to find that the bones have given themselves up to make the broth complete.

Now, what is left over are mere details. At this juncture I most often add a variety of beans to make a good hearty stew, but a variety of different rices will do it justice as well. Other vegetables are added as needed and as requested by those I will be serving.

It takes a lot of time and dedication to cook, and the result turns out differently each time. But, when done properly, and there are many places to make mistakes, it is by far my favorite dish to prepare.

April 20, 2012

How Microsoft can take over the High End Gaming Keyboard market

The picture below is of the Microsoft Sidewinder X6, a largely forgotten gaming keyboard from Microsoft.

[Image: the Microsoft Sidewinder X6 gaming keyboard]

It was, and still is, close to being the best gaming keyboard ever made. Why?

  • A swappable numeric keypad that can be turned into a macro pad means that MMO players are happy.
  • Convenient macro keys close to the WASD cluster, so FPS players can have their fun as well.
  • It has a red backlight that does not ruin your night vision; it also looks less tacky than blue backlights, which are starting to see a backlash against their overuse.
    • The backlight’s brightness can be easily adjusted through the left knob up on top. This makes it really simple to just twist the knob and turn off the backlight before going to bed. No strange key combination to remember.
  • There is a volume control knob for lightning-quick changes in volume level; no pounding on the Vol- key while your ears are getting blasted.
  • A full set of media playback keys, meaning there are no strange hotkeys or function+key combinations to remember.

Now, that said, this keyboard is not perfect. It does not have N-Key Rollover (NKRO), which is very unfortunate. The keyboard that came after it, the Sidewinder X4, has amazing NKRO and red backlighting, but is otherwise a very utilitarian keyboard. This fits its role as a low-cost gaming keyboard, but it entered a very crowded market and didn’t really take the world by storm.

The other problem with the X6 is that the high end gaming keyboard market has moved on. The current big thing is Cherry MX switches of various types. Right now only a few manufacturers are making gaming keyboards with Cherry MX switches, and with the exception of Corsair’s Vengeance series, all the Cherry MX gaming keyboards are fairly spartan in their feature offerings. Many of them do not even have media control keys, and the vast majority have the same styling as regular cheap PC pack-in keyboards.

I believe that when you take into consideration all these factors (Microsoft’s excellent design work on the X6 and the lack of real competitors in this product space), Microsoft is in a great position to enter and dominate the market for high end gaming keyboards.

How? Quite simple: Release an updated version of the Sidewinder X6 with NKRO that uses Cherry MX switches. Offer it in two SKUs, one with Brown switches and one with Red. (The Cherry MX Brown SKU could even have a limited production run, but it would serve the purpose of getting excellent press amongst enthusiasts.)

This would immediately place Microsoft’s offering at the top of the pack for Cherry MX gaming keyboards by offering more features than any other gaming keyboard of comparable quality. The X6’s design was already great, and re-released and updated it has the potential to be the best gaming keyboard sold by anyone.

The second aspect of this is doing a proper marketing campaign. Thankfully there are so few Cherry MX gaming keyboards out on the market right now that getting reviewers to take a look at your product is comparatively easy, as is building up a good grassroots base on forums. If MS sets out full throttle on both paths, top down and ground up, a new Cherry MX X6 should be well received by a community that eagerly awaits the latest high quality products.

October 18, 2011

Beware of the Current Directory when using PowerShell Remoting!

Filed under: PowerShell, Programming, technology — Devlin Bentley @ 2:04 pm

Are your files appearing in strange places? Or maybe not appearing at all? Does everything work when run locally, but when remoting all of a sudden things work a bit differently?

Be aware that when using PowerShell remoting that your working directory may not be what you expect!

Create a simple test script that writes out the value of Get-Location to a log file at an absolute path. Run this script remotely to figure out what your actual default location is!
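For example, a quick diagnostic along these lines will tell you where a remote session actually starts out (the computer name and log path are placeholders, adjust to taste):

Invoke-Command -ComputerName SomeServer -ScriptBlock {
    Get-Location | Out-File -FilePath "C:\Temp\remote-working-dir.log"
}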

At the top of your scripts it may be a good idea to use Set-Location to make sure your current working directory is what you think it is. This is especially true if you try to access files relative to your script’s location. (This is good advice anyway!)
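A sketch of what that can look like at the top of a script (this form works on PowerShell 2.0; newer versions can use $PSScriptRoot instead):

# Pin the working directory to the folder the script itself lives in,
# so relative paths behave the same locally and over remoting.
Set-Location (Split-Path -Parent $MyInvocation.MyCommand.Path)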

Also note that PowerShell tracks its current working directory differently than Windows does. A really good explanation of this exists at http://huddledmasses.org/powershell-power-user-tips-current-directory/


September 25, 2011

PowerShell Call Operator (&): Using an array of parameters to solve all your quoting problems

Filed under: Life in general, PowerShell, Programming — Devlin Bentley @ 7:30 am

I would like to thank James Brundage (blog!) for telling me about this. Suffice to say, the man is seriously into automation.

Alright, if you just want to learn about using arrays of parameters with the call operator (&) and skip all the explanation of what doesn’t work, scroll down to the bottom. I am a big believer in understanding solutions though, so this post will detail everything that doesn’t work and slowly build up towards what does work.

The last blog post I did on this topic was about using Invoke-Expression to solve problems with passing parameters to external programs. I resorted to using Invoke-Expression since (as an undocumented side effect?) Invoke-Expression will strip off quotes from parameters to commands it executes. But in some circles using Invoke-Expression to execute programs is considered heresy. It is thanks to James Brundage that I was able to figure out how to better use & and also come to a greater conscious realization of how PowerShell handles strings.

To summarize the problem, try to get the following to run in PowerShell

$pingopts = "www.example.com -n 5"
ping $pingopts

If you run this command, ping will spit out an error. The root cause of the problem is that PowerShell passes $pingopts to ping with the quotes still on it, so the above line is the same as typing

ping "www.example.com -n 5"

Which is obviously quite wrong.

The next obvious solution is to use the call operator, “&”. The call operator is how you tell PowerShell to basically act as if you had just typed whatever follows into the command line. It is like a little slice of ‘>’ in your script.

Now the call operator takes the first parameter passed to it and uses Get-Command to try to find out what needs to be done. Without going into details about Get-Command, this means the first parameter to the call operator must be only the command that is to be run, not including parameters. The people over at Powershell.com explain it really well.
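A quick way to see that rule in action:

& ping www.example.com       # works: "ping" by itself is resolved as the command
& "ping www.example.com"     # fails: the whole string is looked up as one command name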

With all this in mind, let us try the following

$pingopts = "www.example.com -n 5"
&ping $pingopts

Run that and you will get the exact same error. Fun!

Why is this happening?

The problem is that & does not dequote strings that have spaces in them.

So this code works:

$pingopts = "www.example.com"
&ping $pingopts

Whereas

$pingopts = "  www.example.com"
&ping $pingopts

will not.

But if we think about this for a minute, we already know about this behavior. Heck we expect it and rely on it. It is so ingrained into how we use PowerShell that we don’t even think about it, except for when we run head first into it. So now let us explicitly discuss PowerShell’s handling of strings.

String Quoting Logic

The string auto-quoting and dequoting logic is designed around passing paths. The rule, as demonstrated above, is quite simple: a string with a space in it gets quoted when passed to something outside of PoSH, while a string without spaces in it has its quotes stripped away. This logic basically assumes that if you have a space, you are dealing with a path and you need quotes. If you don’t have a space, you are either dealing with a path that doesn’t need quotes, or you are passing something around that isn’t a path and do not want quotes. For those scenarios PowerShell gives exactly the results people want, which just so happen to be the results people need 95% of the time.
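One quick way to watch the rule in action is to hand strings to cmd.exe’s echo, which prints its argument line back at you (behavior as I understand it, worth confirming on your own box):

$noSpace   = "www.example.com"
$withSpace = "www.example.com -n 5"
& cmd /c echo $noSpace       # prints: www.example.com          (quotes stripped)
& cmd /c echo $withSpace     # prints: "www.example.com -n 5"   (quotes kept)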

Problems arise when you have strings with spaces in them that you do not want quoted after leaving the confines of PowerShell. Bypassing the string quoting/dequoting logic is not easy: you can end up resorting to Invoke-Expression hacks like I detailed earlier, or you can try to find a way to work within the system. The latter is obviously preferable.

The Solution

You may have already guessed the solution from the title of this blog post: Pass an array of parameters to the call operator. Given the sparse documentation available online for & (it would be nice if it said string[] somewhere), one has to have a fairly good understanding of PowerShell to figure this out on one’s own, or just randomly try passing an array to &.

The key here is working the system: by passing parameters in an array you can avoid having spaces in your quoted strings. Where you would normally put a space, you break off and create a separate array element. This is still a bit of a workaround; it would be optimal to find a way to tell & to dequote strings, but this solution does work.

Code:

$pingopts = @("www.example.com", "-n", 5)
&ping $pingopts

Again, notice that instead of “-n 5”, I split it into two array elements.

Just for reference, here is how you would build that command up line by line using an array:

$pingopts = @()
$pingopts += "www.example.com"
$pingopts += "-n"
$pingopts += 5
&ping $pingopts

This actually is not much different from constructing 3 separate variables and passing them in after ping:

$param1 = "www.example.com"
$param2 = "-n"
$param3 = 5
&ping $param1 $param2 $param3

Which is the blatantly obvious solution, but also the ugly one, so I never even considered it. Of course using arrays is more flexible, since you can declare the array at the top and slowly build up your command line throughout your script.
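For instance, here is a sketch of building the array up conditionally; the $count and $resolveNames flags are just illustrative:

$pingopts = @("www.example.com")
if ($count) {
    $pingopts += "-n"
    $pingopts += $count
}
if ($resolveNames) {
    $pingopts += "-a"
}
&ping $pingopts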

Hopefully this saves everyone some time and the journey has helped you understand a bit more about Powershell.

September 22, 2011

Remotely Executing Commands in PowerShell using C#

Filed under: C#, PowerShell, Programming — Devlin Bentley @ 4:02 pm

At first glance, this seems like it should be easy. After all: remoting using PowerShell is dirt easy! Once the server is configured you are a single Invoke-Command away from remoting bliss! So how hard could it be in C#? Surely there is some functional equivalent to Invoke-Command that gets exposed, right?

Well no, it is not that simple. It isn’t too bad, but the lack of any examples on MSDN makes some aspects of doing this quite literally impossible to figure out if you are using MSDN documentation alone.

Thankfully by piecing together posts from the MSDN Forums it is possible to get something working. Having done just that, I figure I’d save you all the time and effort.

Precursor

Get a reference to System.Management.Automation into your project. The best way to do this is to manually edit the csproj file and add the line

<Reference Include="System.Management.Automation" />

Yes this is ghetto. I am not sure why that library is not sitting with the rest of them in the proper location that would get it listed in the “Add References” dialog.

Goal

Calling in from C# code, execute PowerShell  code on a remote machine.

Tools

A bunch of poorly documented object constructors.

The key to getting this all working is to properly construct an instance of WSManConnectionInfo and pass that on to the RunspaceFactory.CreateRunspace(..) method.

Step 1: Constructing a PSCredential

This step is pretty straightforward. PSCredential has only one constructor: it takes in a username and a SecureString password. The only gotcha is that userName includes the domain, if applicable.

Good luck on the SecureString PW part. Doing it properly (e.g. never storing your PW in a string at any step) can take some planning ahead, depending on your situation of course.
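If your scenario allows typing the password in interactively, one approach is to build the SecureString a keystroke at a time so the complete password never exists as a plain System.String. This is only a sketch (wrap it in whatever class is convenient; it needs using System and using System.Security):

// Sketch: read a password from the console without echoing it, appending
// each character directly to the SecureString.
static SecureString ReadPasswordFromConsole()
{
    SecureString securePassword = new SecureString();
    ConsoleKeyInfo key;
    while ((key = Console.ReadKey(true)).Key != ConsoleKey.Enter)
    {
        if (key.Key == ConsoleKey.Backspace && securePassword.Length > 0)
            securePassword.RemoveAt(securePassword.Length - 1);  // honor backspace
        else if (!char.IsControl(key.KeyChar))
            securePassword.AppendChar(key.KeyChar);
    }
    securePassword.MakeReadOnly();
    return securePassword;
}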

PSCredential remoteMachineCredentials = new PSCredential(domainAndUserName, securePassword);

 Step 2: Constructing WSManConnectionInfo

This is the hard part. Mostly because the documentation on how to use this class is so poor, most of it consisting of how to establish a loopback connection to localhost.

With our particular goal in mind, we only really care about one of the overloads:

public WSManConnectionInfo(
    bool useSsl,
    string computerName,
    int port,
    string appName,
    string shellUri,
    PSCredential credential
)

The documentation for this constructor (located at http://msdn.microsoft.com/en-us/library/dd978670(v=VS.9).aspx if you want to read it yourself) has such gems as

appName

The application end point to connect to.

Not very useful.

Let’s go over these parameters one by one and look at what each one means.

useSsl – Pretty simple, a bool flag indicating if SSL should be used. Highly recommended, of course. Note that changing this changes what port number you will be using later on.

computerName – The name of the machine you are connecting to. On local networks, this is just the machine name.

port – Thanks to the blog post at http://blogs.msdn.com/b/wmi/archive/2009/07/22/new-default-ports-for-ws-management-and-powershell-remoting.aspx we know what port numbers PowerShell remoting uses. 5985 if you are not using SSL, and 5986 if you are using SSL.

appName – This should be “/wsman”. I don’t know what it is or what other values you can use here, but thanks to Emeka over on the MSDN forums (thread: http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/a0e5b23c-b605-431d-a32f-942d7c5fd843) we know that “/wsman” will get it working.

shellUri – Again, thanks to Emeka we know that this needs to be “http://schemas.microsoft.com/powershell/Microsoft.PowerShell”. I am not sure what other values are acceptable, but that value does indeed work.

credential – Finally we come back to the PSCredential object we constructed earlier. Simple enough.

Step 3: Creating a Runspace

We thankfully return to documented things. After constructing the WSManConnectionInfo, just pass it into RunspaceFactory.CreateRunspace(…) and be done with it.

Step 4: Putting it all together

string shellUri = "http://schemas.microsoft.com/powershell/Microsoft.PowerShell";
PSCredential remoteCredential = new PSCredential(domainAndUsername, securePW);
WSManConnectionInfo connectionInfo = new WSManConnectionInfo(false, remoteMachineName, 5985, "/wsman", shellUri, remoteCredential);

using (Runspace runspace = RunspaceFactory.CreateRunspace(connectionInfo))
{
    runspace.Open();
    Pipeline pipeline = runspace.CreatePipeline("<COMMAND TO EXECUTE REMOTELY>");
    var results = pipeline.Invoke();
}

All in all, not too bad. The sad part is, in terms of figuring out what to do, it is easier to create a local runspace and use Invoke-Command on it; but once you know what you are doing, it is only one extra line of code to execute scripts remotely!

September 21, 2011

PowerShell Tip: Building a command line for execution

Filed under: PowerShell, Programming — Devlin Bentley @ 4:59 pm

Update: This still works best for some scenarios, but learn about how to use the call operator to do this and work within the system!

So you want to build up a command line to execute some utility. Simple enough, right? Let’s use ping as an example. Say you want to ping example.com 5 times. Your parameters would look like www.example.com -n 5. Append this to the end of ping in PowerShell and off you go, right?

$pingopts = "www.example.com -n 5"
ping $pingopts

Run that in PowerShell and you will hit a small problem. Ping will give you an error about not being able to find the host www.example.com -n 5.

If you examine the command line that is used to execute ping (pick your favorite tool, I chose process monitor!) what is happening becomes quite clear. The command that was executed was this:

"C:\Windows\system32\PING.EXE" "www.example.com -n 5"

The problem is that the string quotes around $pingopts were kept in place. While keeping quotes is useful when passing paths around, it is not what you want in most other circumstances. We need to make those quotes go away, and thankfully you can use Invoke-Expression to do just that.

$pingopts = "www.example.com -n 5"
invoke-expression "ping $pingopts"

This code will work perfectly! If you know of any other solutions please post about them; I am sure there are many ways to solve this problem in PowerShell!

In summary: Generally when working with PowerShell cmdlets you don’t have to worry about strings; PowerShell and its cmdlets handle quoting and dequoting them perfectly. But you have to be careful when interfacing with the non-PowerShell world.

September 9, 2011

How To Properly Get The Drive Letter of a Mounted VHD in PowerShell

Filed under: PowerShell — Devlin Bentley @ 1:06 pm

There are multiple ways to get the drive letter of a mounted VHD in PowerShell. The most common way is to enumerate all drive letters, mount a VHD, enumerate all drive letters again, and find out which new drive letters have appeared. This strategy is NOT safe and will break down if you have more than one program or instance of your script trying to use this method at the same time.

The reason why it isn’t safe is quite obvious. Assume you have a machine with just one active drive at start, “C”.

  1. Script 1 enumerates all drives, gets back a list {“C”}
  2. Script 1 mounts VHD1, VHD1 is assigned drive letter “D”.
  3. Script 2 enumerates all drives, gets back a list {“C”, “D”}
  4. Script 2 mounts VHD2, VHD2 is assigned drive letter “E”.
  5. Script 1 enumerates all drives, gets back a list {“C”, “D”, “E”}

At this point Script 1 is not sure which drive belongs to the VHD it mounted. If you have a UNIQUE volume name, great! You can select based on volume name and you are in luck.

If you don’t though, you have run into the limitations of this technique.

But there is a better way!

Credit goes out to the PowerShell Management Library for HyperV. They do it properly!

First thing to know is that VHDs are mounted as virtual SCSI Disks. A virtual SCSI Disk can be uniquely identified by a combination of LUN, SCSI Target ID and SCSI Port. Our basic strategy is going to be mapping from Mounted VHD path to a Virtual SCSI Disk and then digging into that Disk object to find out what drive letter it has.

So let’s break out some WMI, shall we?


# Given the full path to an already mounted VHD and the name of a volume on it,
# returns the drive letter that VHD was mounted to
function GetDriveLetterOfMountedVHD($FullPathToVHD, $VolumeName)
{
   $MountedDiskImage = Get-WmiObject -Namespace root\virtualization -Query "SELECT * FROM MSVM_MountedStorageImage WHERE Name ='$($FullPathToVHD.Replace("\", "\\"))'"
   $Disk = Get-WmiObject -Query ("SELECT * FROM Win32_DiskDrive " +
        "WHERE Model='Msft Virtual Disk SCSI Disk Device' AND ScsiTargetID=$($MountedDiskImage.TargetId) " +
        "AND   ScsiLogicalUnit=$($MountedDiskImage.Lun)   AND ScsiPort=$($MountedDiskImage.PortNumber)" )
    $Partitions = $Disk.getRelated("Win32_DiskPartition")
    $LogicalDisks = $Partitions | foreach-object{$_.getRelated("win32_logicalDisk")}
    $DriveLetter = ($LogicalDisks | where {$_.VolumeName -eq $VolumeName}).DeviceID
    return $DriveLetter
}

The key thing to notice here is that you are asking WMI for a MountedStorageImage based on the full path of the VHD you mounted. This guarantees that you are not conflicting with any other script’s VHD activities. All the info returned to you is only about the VHD you mounted.

The rest of the function is pretty straightforward. It can actually all be done in one line, but I expanded it out here for clarity.

  1. Using the knowledge you have about the MountedDiskImage’s assigned Virtual SCSI info, get a Win32 Disk Drive by searching on the matching LUN, TargetID and SCSI Port
  2. Get a list of partitions on that disk.
  3. Get a list of logical disks (the things you see in My Computer) associated with each partition
  4. Return the drive letter of the volume that you want.

Now if your logical disks happen to have identical volume labels you can index into $LogicalDisks and pick out which one you want that way, and so long as you don’t go rearranging partitions in your VHD that may work just fine. In addition, you can replace the last where {$_.VolumeName -eq …} bit with something unique to your situation (Size, FileSystem, etc).
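For reference, a hypothetical call looks like this (the VHD path and volume name are made up, and the VHD must already be mounted):

$driveLetter = GetDriveLetterOfMountedVHD "D:\VHDs\BuildImage.vhd" "BuildData"
# $driveLetter now holds something like "E:", usable like any other drive.
Get-ChildItem "$driveLetter\"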

One final note: due to the use of the virtualization WMI namespace, this code will only work on Windows Servers that have the Hyper-V Role installed. With the announcement that Windows 8 is getting Hyper-V, I am hopeful that the WMI virtualization namespace will become available to client OSs as well!

