Peace For All

July 11, 2013

Thoughts on work ethic as a culturally competitive advantage

Filed under: Life in general — Devlin Bentley @ 12:52 pm

There has been another resurgence of talk about “working hard” versus “working smart”. On one side there is the straw man setup of the evil, horrid, slave-driving American work ethic. On the other side stand those who say we need only work smarter, not harder: 35 hours a week is plenty, take vacations when needed, and so forth.

Now those arguing for working smarter do have some valid points. There are still aspects in American industry that, through one means or another, encourage long hours of drudgery that could be reduced greatly. Examples of this abound in the software industry, oftentimes preceded by the phrase “we don’t have time to invest in infrastructure.”

In these cases, yes: if a weekly test pass involves scrambling around to run a bunch of manual tests in a disorderly fashion, with steps frequently missed, causing bugs to slip through and necessitating slews of developer (and test!) overtime, then indeed that organization needs to work smarter and not harder. Engineering Excellence (Microsoft’s term) best practices exist specifically to avoid these kinds of scenarios.

But there is another reason why a team may be putting in really long hours: they are solving really hard problems. This is especially true when attempting to gain first mover advantage in a market, or when trying to outmaneuver one’s competition.

Finding people who can solve difficult problems is itself no easy task; combine that with ramp-up time and it may no longer be viable (or possible within a reasonable time frame) to hire more people. At this point asking someone to work Saturdays becomes the only possible course of action.

Now this by no means excuses projects that fall behind and do death marches to catch up, especially when crunch time extends month after month. I am simply arguing that poorly organized projects are just one reason why extra hours may be put in.

All that said, I also believe that the American work ethic (or at least what survives of it) is qualitatively different than the work ethic of other cultures. Most importantly, I would like to draw a distinction between the American work ethic and the frequently compared work ethics of countries such as Japan and Korea.

Specifically, I propose that America’s unique background has created a culture that provides America with a competitive advantage in the world economy. What’s more, I also propose that this cultural advantage is America’s primary competitive advantage, far exceeding any other competitive advantages America may have.

So first, a contrast with the work ethic of Japan[1]. The stereotype, which does seem to have a good deal of truth to it, is that Japanese businessmen (and from what I can tell, sexism makes it mostly men; if I am wrong about this please correct me in the comments below!) are expected to work long hours at the office, with their overtime often spent doing make-work, hours being put in just to keep up appearances. To this end, the expectation of extra hours does not come with an expectation of extra productivity, nor are the hours strictly a result of project mismanagement (although I am sure that exists too, as it is a potential within every human-run endeavour).

I believe that trying to say that the American work ethic is comparable to the above description is a straw man argument. The American work ethic most certainly does come with an expectation of extra productivity. This in turn provides a competitive advantage, as salaried employees are basically paid a reduced rate (when converted to hourly compensation) for their labor. Twenty hours of overtime amounts to half the labor of an entire new employee at no extra salary, not counting all the other costs a separate employee would incur upon an organization. Basically our advantage here is the expectation of extra productivity without the toxic aspects of expecting hours to be put in just for the sake of appearances. (Though again, that does exist in some American work environments, it is not universal to American culture.)

With that straw man defeated, we can move on to other ways in which the American work ethic serves as an economic advantage.

One thing this work ethic allows for is the completion of difficult tasks at both reduced cost and in a reduced timeframe. Productivity may not scale linearly with hours, but to a certain extent (for at least some months before exhaustion sets in) it does scale. This allows projects to be completed in less time and at lower cost (in terms of both time and headcount) than they would be under a strict 40-hour (or less!) work week.

An example of this serving as an advantage is getting a minimum viable product out to market rapidly. The hard facts remain: putting in the hours is the best way to get a job done, especially when racing against competition!

If you put two equally talented teams together, plop one down in the Bay Area and one down in a well-off EU country with strict labor laws, the American team is going to get done first, flat out. Six-day work weeks and no vacations will get a product out the door and into the hands of customers before 35-hour work weeks and vacation time ever do. (Just please give the American engineers a break after launch!)

So now to move on to another area in which I believe the American work ethic provides an advantage: pride in work done.

I am not claiming that pride in one’s work is a uniquely American trait, nor will I claim it is universally shared by all Americans. I will however claim that it is a component of the overall American culture and part of the expected American work ethic in many places.

The honest fact is, people who take pride in their work will put in however many hours are needed to do a job properly. The result of this is a qualitatively better end product at no additional cost to the employer.

Again, this is often just a matter of some problems being hard to solve and needing time invested in them. If a product ship date is fixed, then more hours may be the only viable solution. With salaried employees and no overtime pay, quality becomes doable at a lower cost, and cultural expectations mean one is looked down upon for merely doing a “good enough” job. This is especially true within many areas of the tech sector (not all, obviously; lots of crappy American-made software and hardware exists).

To give an example of this: on the previous project I was on, we had a button on our product. There was a group of people (two or three) dedicated to making it the best damned button you had ever used in your life. When feedback came in that the button didn’t quite have enough of a solid feel when fully depressed, the engineers in charge of the button ordered slews of new buttons manufactured with different springs to do comparisons on. When feedback came in that the button wasn’t quite grippy enough, multiple coatings were tried out, and different rubber and plastic edges were experimented with. Over the course of the project I lost track of how many variations of the button got tried out.

All that effort for one specific button on one project.

I don’t even know the names of the engineers who worked on that button, but I have a lot of respect for their pursuit of perfection.

In contrast, the tablet I have has horribly loose buttons that are trivially actuated by accident, frequently resulting in the screen turning on during travel and draining the tablet’s battery.

The pursuit of perfection and taking pride in work done are the final aspects of the American work ethic that give us our economic advantage.

While all the cultural traits discussed above exist in other cultures, they are uniquely combined in American culture: expectations of high productivity, pride in work, and no expectation of compensation for overtime.

Of course, other aspects of the American work ethic exist that I have not described here; many startups exist because founders initially maintained 40 hours a week at work, then went home to put in another 40 on their own project. These other aspects also greatly contribute to why America is so economically successful, but, well, I am typing this on a cellphone and I feel this post is long enough already!

[1] I have seen the same comparisons made between the American and Korean work ethics as are made with the Japanese work ethic, but I have not read up enough on Korean business culture to feel comfortable making any comparisons myself.


March 13, 2013

C# Feature Need: Lambdas having an implicit reference to themselves

Filed under: C#, Programming — Tags: , , , — Devlin Bentley @ 1:07 pm

I really want a “me” keyword in C# that gives a reference to the current lambda.

For event handling it is often useful to have a one-time event handler that detaches itself after it has fired. Currently this isn’t too hard: you assign said handler to a nice Func<T…> or Action<T…> (or an actual event handler delegate type), and when you reference it in your lambda (while unassigning it), it gets captured in your closure.

An example of this is:

EventHandlerType handler = null;
handler = (string receivedString) =>
{
    this._someString = receivedString;
    EventClass.Event -= handler;
};
EventClass.Event += handler;

As you can see above, handler is my event handler: it takes in a lone string (because event args is for chumps), assigns said string to a class member variable, and then detaches itself.

This isn’t horrible, but it is still a fair pain in the arse. I’d presume the lambda already has a reference to itself somewhere, so my creating an explicit one seems sort of redundant. It is also an unneeded closure; quite frequently that is the only variable I am closing over, which means I have a fairly sizable overhead just to capture a reference to myself!

On a separate note, I wonder if declaring your handlers as class members optimizes this in any way. I am not 100% sure whether they get captured in the closure; I should read up on it to see if I can find clarification. Thinking about it some more, there may be times when there is a need to capture them, but if they are public members this might not be needed. I am now wondering if the C# compiler is smart enough to optimize this away.

Anyway, none of that would matter if C# had a keyword that said “give me a reference to the bloody function I am in right now!”

And hey, type inference means the syntax could be really nice! 🙂

(And if there is already a way to do this, that doesn’t involve gobs of reflection code, please do tell!)

Now this really becomes a pain when you are trying to chain event handlers together. I have some annoying lock step code I need to write where I do a call-handle-call-handle over a network channel. Each message I get is of the same type (so it flows through the same event), but the handler has to be different each time.

Now obviously I could make one giant lambda that tracks its state and how many times it has responded to messages, but I much prefer simpler lambdas that do exactly one thing. Thus I am constantly unassigning and reassigning handlers to the same event. My code would be a lot cleaner if I didn’t have to predeclare all my handlers.

(Of course this code is dealing with an impedance mismatch between an non-OO system and my OO system, so the code is going to be somewhat ugly, but I prefer to minimize this as much as possible!)
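Absent such a keyword, one workaround is to bottle the self-detaching pattern up in a small helper so it only has to be written once. This is just a sketch: EventOnce, the subscribe/unsubscribe delegates, and the Action&lt;string&gt; signature are hypothetical stand-ins for whatever event type is actually in play.

```csharp
using System;

static class EventOnce
{
    // Subscribes 'body' to an event via the supplied hooks, wrapped so that
    // the wrapper unsubscribes itself the first time it fires.
    public static void Once(
        Action<Action<string>> subscribe,    // e.g. h => EventClass.Event += h
        Action<Action<string>> unsubscribe,  // e.g. h => EventClass.Event -= h
        Action<string> body)
    {
        Action<string> handler = null;
        handler = s =>
        {
            unsubscribe(handler);  // detach first, so a re-entrant raise can't double-fire
            body(s);
        };
        subscribe(handler);
    }
}
```

A call site then reads EventOnce.Once(h => EventClass.Event += h, h => EventClass.Event -= h, s => this._someString = s); the closure over the handler still exists under the hood, but at least it lives in exactly one place.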

July 3, 2012

Why Would Anyone Give Up Dreams?

Filed under: Life in general — Devlin Bentley @ 11:06 am

The music in my dreams, the sights, and the visions. The artistry and the wonder, the potential untapped. Never would I surrender eight hours of dreaming the future, discovering my past.

May 2, 2012

Switcher, an awesome alt-tab replacement, with search!

Filed under: Life in general, technology — Tags: , , , — Devlin Bentley @ 12:11 pm

I needed an alt-tab replacement that allowed me to search open windows (yes, I have that many windows open!), and after a few minutes of searching I found the amazing utility Switcher. The animations are a bit slow, but you can turn them off and have a really rapid alt-tab replacement that allows for search! Search is amazing: I have 20 windows open right now, and alt-tabbing through them is generally a pain, but I type at 120WPM, so searching is faster than using my mouse or having to hit alt-tab, do a visual check of which app is selected, rinse, wash, repeat.

My only complaint is that when using multiple monitors, which monitor search results show up on seems fairly arbitrary. It also splits results across screens; it would be nice if there were a way to tell it to stick to one screen or the other.

But those are minor complaints compared to the amount of time and frustration I am saving with it!

April 26, 2012

My favorite meal to cook

Filed under: Life in general — Tags: , , , , , , — Devlin Bentley @ 11:09 am

I get asked this question a fair bit, so I decided to make a blog post about it. For those of you who don’t know, one of my passions is cooking healthy food at home, from traditional American favorites to dishes from around the world.

On multiple occasions the question “What’s the best meal you have ever made?” has been asked of me. That, it turns out, is a question that requires a fair bit of detail to answer.

Let me start off with yesterday: my dinner consisted of a homemade soup, tasty if simple and utilitarian. I had prepared it the night prior with the intent of consuming it after returning from work, which is exactly what I did.

But that soup is of no great consequence; the effort put in was minimal and the result sufficient. It serves only as an example of how I prefer to plan my meals during my work week.

Twice Baked Potatoes

Now, if one asked my friends and family which of my dishes was their favorite, I can promise that the answer would be my Twice Baked Potatoes. All twice baked potatoes start from the same base: potato innards, generous amounts of sour cream, an equally good measure of grated sharp cheddar cheese, a bit of butter, chopped green onions, and mayhaps a bit of crushed garlic. To this I add my one custom ingredient, the ingredient that shocks and amazes: at least two large peeled shrimp, placed into each potato shell after it has been stuffed.

Preparation instructions are the same as for any other twice baked potato: merely bake the potatoes to near completion, cut them in half, scoop out the insides, prepare the stuffing as described above, spoon the stuffing back into the shells, and bake again for a short bit of time.

Simple, though a bit of legwork in practice, but the end result is wonderful. No one expects, but everyone has been pleased by, the shrimp.

72 Hour Smoked Bacon Stew

Now I will not argue that my twice baked potatoes are not one of my best dishes; surely they are. But they are not the dish into which I put the most love.

What I pour my heart into is the making of my 72 Hour Smoked Bacon Stew.

The first step in its creation is to acquire bone-in smoked bacon from a European deli, and I am thankful that I live near a number of them. The bacon is then placed into a large pot filled with tomato sauce (if I am truly feeling up to it, I make the tomato sauce myself from purchased tomatoes). Some simple seasonings are added to the pot, along with a few bay leaves and a wonderful chili powder variant that one can only acquire online.

This is then allowed to stew for a wee bit less than three days, tended to and watched carefully so that the broth does not boil away or bubble over.

On the first day, the house is filled with a wonderful smoky aroma. It is a pleasant harbinger of things to come.

On the second day, both the meat and the fat have fallen from the bone, and it can be seen where the bone marrow itself has started to fall apart.

It is on the third day that one awakens to find that the bones have given themselves up to make the broth complete.

Now, what is left over are mere details. At this juncture I most often add a variety of beans to make a good hearty stew, but any of a number of rices will do justice as well. Other vegetables are added as needed and as requested by those I will be serving.

It takes a lot of time and dedication to cook, and the result turns out differently each time. But when done properly (and there are many places to make mistakes), it is by far my favorite dish to prepare.

April 20, 2012

How Microsoft can take over the High End Gaming Keyboard market

The picture below is of the Microsoft Sidewinder X6, a largely forgotten gaming keyboard from Microsoft.


It was, and still is, close to being the best gaming keyboard ever made. Why?

  • A swappable numeric keypad that can be turned into a macro pad, which keeps MMO players happy.
  • Convenient macro keys close to the WASD cluster, so FPS players can have their fun as well.
  • It has a red backlight that does not ruin your night vision; it also looks less tacky than the blue backlights whose overuse is starting to see a backlash.
    • The backlight’s brightness can be easily adjusted through the left knob up on top. This makes it really simple to just twist the knob and turn off the backlight before going to bed. No strange key combination to remember.
  • There is a volume control knob for lightning quick changes in volume level, so there is no pounding on the Vol- key while your ears are getting blasted.
  • A full set of media playback keys, meaning there are no strange hotkeys or function+key combinations to remember.

Now, that said, this keyboard is not perfect. It does not have N-Key Rollover, which is very, very unfortunate. The keyboard that came after it, the Sidewinder X4, has amazing NKRO and red backlighting, but is otherwise a very utilitarian keyboard. That fits its role as a low-cost gaming keyboard, but it entered a very crowded market and didn’t really take the world by storm.

The other problem with the X6 is that the high end gaming keyboard market has moved on. The current big thing is Cherry MX switches of various types. Right now only a few manufacturers are making gaming keyboards with Cherry MX switches, and with the exception of Corsair’s Vengeance series, all of them are fairly spartan in their feature offerings. Many do not even have media control keys, and the vast majority have the same styling as regular cheap PC pack-in keyboards.

I believe that when you take all these factors into consideration (Microsoft’s excellent design work on the X6 and the lack of real competitors in this product space), Microsoft is in a great position to enter and dominate the market for high end gaming keyboards.

How? Quite simple: release an updated version of the Sidewinder X6 with NKRO that uses Cherry MX switches. Offer it in two SKUs, one with Brown switches and one with Red. (The Cherry MX Brown SKU could even have a limited production run, but it would serve the purpose of getting excellent press amongst enthusiasts.)

This would immediately place Microsoft’s offering at the top of the pack for Cherry MX gaming keyboards by offering more features than any other gaming keyboard of comparable quality. The X6’s design was already great; re-released and updated, it has the potential to be the best gaming keyboard sold by anyone.

The second aspect of this is doing a proper marketing campaign. Thankfully there are so few Cherry MX gaming keyboards on the market right now that getting reviewers to take a look at your product is comparably easy, as is building up a good grassroots base on forums. If MS sets out full throttle on both paths, top-down and ground-up, a new Cherry MX X6 should be well received by a community that eagerly awaits the latest high quality products.

October 18, 2011

Beware of the Current Directory when using PowerShell Remoting!

Filed under: PowerShell, Programming, technology — Tags: , , , , — Devlin Bentley @ 2:04 pm

Are your files appearing in strange places? Or maybe not appearing at all? Does everything work when run locally, but when remoting all of a sudden things work a bit differently?

Be aware that when using PowerShell remoting, your working directory may not be what you expect!

Create a simple test script that writes out the value of Get-Location to a log file at an absolute path. Run this script remotely to figure out what your actual default location is!

At the top of your scripts it may be a good idea to use Set-Location to make sure your current working directory is what you think it is. This is especially true if you access files relative to your script’s location. (This is good advice anyway!)
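The advice above can be sketched as follows. This is a minimal example, not a drop-in: the log file name is made up, and $PSScriptRoot only has a value when the code runs from a saved script (PowerShell 3.0+).

```powershell
# Record where we actually are, then pin the working directory so that
# relative paths behave the same under remoting as they do locally.
$logPath = Join-Path ([System.IO.Path]::GetTempPath()) 'location.log'   # absolute path, so the log is always findable
Get-Location | Out-File -FilePath $logPath

# $PSScriptRoot is the folder containing the running script; guard it so
# the snippet also runs when pasted interactively (where it is empty).
if ($PSScriptRoot) { Set-Location -Path $PSScriptRoot }
```

Running this remotely and checking the log is the quickest way to see what your actual default location is.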

Also note that PowerShell tracks its current working directory differently than Windows does; a really good explanation of this exists elsewhere online.


September 25, 2011

PowerShell Call Operator (&): Using an array of parameters to solve all your quoting problems

Filed under: Life in general, PowerShell, Programming — Tags: , — Devlin Bentley @ 7:30 am

I would like to thank James Brundage (blog!) for telling me about this. Suffice to say, the man is seriously into automation.

Alright: if you just want to learn about using arrays of parameters with the call operator (&) and skip all the explanation of what doesn’t work, scroll down to the bottom. I am a big believer in understanding solutions, though, so this post will detail everything that doesn’t work and slowly build up towards what does.

The last blog post I did on this topic was about using Invoke-Expression to solve problems with passing parameters to external programs. I resorted to using Invoke-Expression since (as an undocumented side effect?) Invoke-Expression will strip off quotes from parameters to commands it executes. But in some circles using Invoke-Expression to execute programs is considered heresy. It is thanks to James Brundage that I was able to figure out how to better use & and also come to a greater conscious realization of how PowerShell handles strings.

To summarize the problem, try to get the following to run in PowerShell

$pingopts = " -n 5"
ping $pingopts

If you run this command, ping will spit out an error. The root cause of the problem is that PowerShell passes $pingopts to ping with the quotes still on it, so the above line is the same as typing

ping " -n 5"

Which is obviously quite wrong.

The next obvious solution is to use the call operator, “&”. The call operator is how you tell PowerShell to basically act as if you had just typed whatever follows into the command line. It is like a little slice of ‘>’ in your script.

Now, the call operator takes the first parameter passed to it and uses Get-Command to figure out what needs to be done. Without going into details about Get-Command, this means the first parameter to the call operator must be only the command that is to be run, not including parameters. This behavior is explained really well elsewhere online.
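A quick illustration of that rule, using a built-in cmdlet so there is nothing external to install (any command would do):

```powershell
& 'Get-Date'                  # works: the string names exactly one command
& 'Get-Date' -Format yyyy     # works: the command name stands alone; parameters follow separately
# & 'Get-Date -Format yyyy'   # fails: Get-Command looks for a command literally named
#                             # "Get-Date -Format yyyy", and no such command exists
```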

With all this in mind, let us try the following

$pingopts = " -n 5"
&ping $pingopts

Run that and you will get the exact same error. Fun!

Why is this happening?

The problem is that & does not dequote strings that have spaces in them.

So this code works:

$pingopts = "bing.com"
&ping $pingopts

Whereas

$pingopts = "bing.com -n 5"
&ping $pingopts

will not.

But if we think about this for a minute, we already know about this behavior. Heck we expect it and rely on it. It is so ingrained into how we use PowerShell that we don’t even think about it, except for when we run head first into it. So now let us explicitly discuss PowerShell’s handling of strings.

String Quoting Logic

The string auto quoting and dequoting logic is designed around passing paths around. The rule, as demonstrated above, is quite simple: a string with a space in it gets quoted when passed to something outside of PoSH, while a string without spaces has its quotes stripped away. This logic basically assumes that if you have a space, you are dealing with a path and you need quotes. If you don’t have a space, you are either dealing with a path that doesn’t need quotes, or passing around something that isn’t a path and you do not want quotes. For those scenarios PowerShell gives exactly the results people want, which just so happen to be the results people need 95% of the time.

Problems arise when you have strings with spaces in them that you do not want quoted after they leave the confines of PowerShell. Bypassing the string quoting/dequoting logic is not easy: you can end up resorting to Invoke-Expression hacks like I detailed earlier, or you can try to find a way to work within the system. The latter is obviously preferable.

The Solution

You may have already guessed the solution from the title of this blog post: pass an array of parameters to the call operator. Given the sparse documentation available online for & (it would be nice if it said string[] somewhere), one has to have a fairly good understanding of PowerShell to figure this out on their own, or just randomly try passing an array to &.

The key here is working the system: by passing parameters in an array you can avoid having spaces in your quoted strings. Where you would normally put a space, you break off and create a separate array element. This is still a bit of a workaround; it would be optimal to find a way to tell & to dequote strings, but this solution does work.


$pingopts = @("bing.com", "-n", 5)
&ping $pingopts

Again, notice that instead of “-n 5”, I split it into two array elements.

Just for reference, here is how you would build that command up line by line using an array:

$pingopts = @()
$pingopts += "bing.com"
$pingopts += "-n"
$pingopts += 5
&ping $pingopts

This actually is not much different from constructing 3 separate variables and passing them in after ping:

$param1 = "bing.com"
$param2 = "-n"
$param3 = 5
&ping $param1 $param2 $param3

That is the blatantly obvious solution, but also the ugly one, so I never even considered it. Of course, using arrays is more flexible, since you can declare the array at the top and slowly build up your command line throughout your script.
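That flexibility is the real payoff: the command line can be assembled conditionally across a script before a single invocation at the end. A sketch (the host name and count are made-up example values):

```powershell
# Build a ping command line piece by piece, then invoke once.
$pingArgs = @('bing.com')           # example target host
$count = 2                          # example repetition count
if ($count) {
    $pingArgs += @('-n', $count)    # add the count flag only when one was requested
}
if ($env:OS -eq 'Windows_NT') {     # -n is the Windows count flag, so guard the invocation
    & ping $pingArgs
}
```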

Hopefully this saves everyone some time, and the journey has helped you understand a bit more about PowerShell.

September 22, 2011

Remotely Executing Commands in PowerShell using C#

Filed under: C#, PowerShell, Programming — Tags: , , — Devlin Bentley @ 4:02 pm

At first glance, this seems like it should be easy. After all: remoting using PowerShell is dirt easy! Once the server is configured you are a single Invoke-Command away from remoting bliss! So how hard could it be in C#? Surely there is some functional equivalent to Invoke-Command that gets exposed, right?

Well no, it is not that simple. It isn’t too bad, but the lack of any examples on MSDN makes some aspects of doing this quite literally impossible to figure out if you are using MSDN documentation alone.

Thankfully, by piecing together posts from the MSDN Forums it is possible to get something working. Having done just that, I figured I’d save you all the time and effort.


Get a reference to System.Management.Automation into your project. The best way to do this is to manually edit the csproj file and add the line

<Reference Include="System.Management.Automation" />

Yes this is ghetto. I am not sure why that library is not sitting with the rest of them in the proper location that would get it listed in the “Add References” dialog.


Calling in from C# code, execute PowerShell  code on a remote machine.


A bunch of poorly documented object constructors.

The key to getting this all working is to properly construct an instance of WSManConnectionInfo and pass that on to the RunspaceFactory.CreateRunspace(..) method.

Step 1: Constructing a PSCredential

This step is pretty straightforward. PSCredential has only one constructor; it takes in a username and a SecureString password. The only gotcha is that userName includes the domain, if applicable.

Good luck on the SecureString PW part. Doing it properly (i.e. never storing your PW in a plain string at any step) can take some planning ahead, depending on your situation of course.
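A sketch of the keystroke-at-a-time approach for a console app, so the password characters never pass through a plain System.String (PasswordReader is a made-up helper name; this assumes an interactive console):

```csharp
using System;
using System.Security;

static class PasswordReader
{
    // Reads a password from the console one keystroke at a time, without echo.
    public static SecureString ReadPassword()
    {
        var password = new SecureString();
        ConsoleKeyInfo key;
        while ((key = Console.ReadKey(intercept: true)).Key != ConsoleKey.Enter)
        {
            if (key.Key == ConsoleKey.Backspace && password.Length > 0)
                password.RemoveAt(password.Length - 1);   // honor backspace
            else if (!char.IsControl(key.KeyChar))
                password.AppendChar(key.KeyChar);         // append printable chars only
        }
        password.MakeReadOnly();                          // lock it down once complete
        return password;
    }
}
```

On the PowerShell side, Get-Credential and Read-Host -AsSecureString accomplish the same thing for you.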

PSCredential remoteMachineCredentials = new PSCredential(domainAndUserName, securePassword);

 Step 2: Constructing WSManConnectionInfo

This is the hard part, mostly because the documentation on how to use this class is so poor; most of it consists of establishing a loopback connection to localhost.

With our particular goal in mind, we only really care about one of the overloads:

public WSManConnectionInfo (
    bool useSsl,
    string computerName,
    int port,
    string appName,
    string shellUri,
    PSCredential credential
)

The documentation for this constructor (available on MSDN if you want to read it yourself) has such gems as “The application end point to connect to.”

Not very useful.

Let’s go over these parameters one by one and look at what each one means.

useSsl – Pretty simple: a bool flag indicating whether SSL should be used. Highly recommended, of course. Note that changing this changes what port number you will be using later on.

computerName – The name of the machine you are connecting to. On local networks, this is just the machine name.

port – Thanks to a helpful blog post, we know what port numbers PowerShell remoting uses: 5985 if you are not using SSL, and 5986 if you are.

appName – This should be “/wsman”. I don’t know what other values you can use here, but thanks to Emeka over on the MSDN forums we know that “/wsman” will get it working.

shellUri – Again, thanks to Emeka we know that this needs to be http://schemas.microsoft.com/powershell/Microsoft.PowerShell. I am not sure what other values are acceptable, but that value does indeed work.

credential – Finally we come back to the PSCredential object we constructed earlier. Simple enough.

Step 3: Creating a Runspace

We thankfully return to documented territory. After constructing the WSManConnectionInfo, just pass it into RunspaceFactory.CreateRunspace(…) and be done with it.

Step 4: Putting it all together

string shellUri = "http://schemas.microsoft.com/powershell/Microsoft.PowerShell";
PSCredential remoteCredential = new PSCredential(domainAndUsername, securePW);
WSManConnectionInfo connectionInfo = new WSManConnectionInfo(false, remoteMachineName, 5985, "/wsman", shellUri, remoteCredential);

using (Runspace runspace = RunspaceFactory.CreateRunspace(connectionInfo))
{
    runspace.Open();
    Pipeline pipeline = runspace.CreatePipeline("<COMMAND TO EXECUTE REMOTELY>");
    var results = pipeline.Invoke();
}
All in all, not too bad. The sad part is that, in terms of figuring out what to do, it is easier to create a local runspace and use Invoke-Command on it; but once you know what you are doing, it is only one extra line of code to execute scripts remotely!

September 21, 2011

PowerShell Tip: Building a command line for execution

Filed under: PowerShell, Programming — Tags: , , , — Devlin Bentley @ 4:59 pm

Update: This still works best for some scenarios, but learn about how to use the call operator to do this and work within the system!

So you want to build up a command line to execute some utility. Simple enough, right? Let’s use ping as an example. Say you want to ping a host 5 times. Your parameters would look like -n 5. Append this to the end of ping in PowerShell and off you go, right?

$pingopts = " -n 5"
ping $pingopts

Run that in PowerShell and you will hit a small problem: ping will give you an error about not being able to find the host “ -n 5”.

If you examine the command line that is used to execute ping (pick your favorite tool; I chose Process Monitor!), what is happening becomes quite clear. The command that was executed was this:

"C:\Windows\system32\PING.EXE" " -n 5"

The problem is that the string quotes around $pingopts were kept in place. While keeping quotes is useful when passing paths around, it is not what you want in most other circumstances. We need to make those quotes go away, and thankfully you can use Invoke-Expression to do just that.

$pingopts = " -n 5"
invoke-expression "ping $pingopts"

This code will work perfectly! If you know of any other solutions please post about them, I am sure there are many ways to solve this problem in PowerShell!

In summary: generally when working with PowerShell cmdlets you don’t have to worry about strings, since PowerShell and its cmdlets handle quoting and dequoting them perfectly, but you have to be careful when interfacing with the non-PowerShell world.

