Unity still has its roots in mobile game development, and it reminds you of this fact in peculiar ways. One of the first things I did in my Unity project was bind CTRL to modify mouse behavior (for click-and-drag movement) and bind Z to change the view zoom level. As I played my test game I discovered, to my horror, that the Unity Editor was executing “Undo” actions in my scene even though my game window was selected. Fortunately, changes to the scene are transient while the Player is running, and everything goes back to normal when play stops. Though apparently you may not be so lucky if you type CTRL+N, which may (or may not – I didn’t confirm this) wipe your scene from existence.
This sort of keyboard binding is pretty standard for PC games, but Unity is totally incapable of dealing with it out of the box. And little wonder: it’s an entirely alien concern in mobile game development. You simply never think about conflicts between in-game keyboard controls and the editor when you’re a mobile games developer. I’m a desktop PC developer (at this moment!), so I need a workaround or fix.
I evaluated a handful of suggested workarounds and one Gist script meant to solve the issue. None were to my satisfaction, so I wrote my own.
Usage
In order to use this gist, you will need to create a Player profile in your Unity Shortcuts, via menu item Edit->Shortcuts:
(I strongly suggest leaving CTRL+TAB bound, to allow for quick nav between player and editor)
… and then, running the game with the Player profile active, you should see this console output as your player gains and loses focus:
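The gist itself isn’t reproduced here, but the core idea can be sketched with Unity’s ShortcutManager API. This is a simplified variant of my approach (it keys off play mode rather than window focus, and assumes you created an empty shortcut profile named "Player" via Edit → Shortcuts):

```csharp
// Editor-only sketch -- place in an Editor/ folder. Assumes an (empty)
// shortcut profile named "Player" was created via Edit -> Shortcuts.
using UnityEditor;
using UnityEditor.ShortcutManagement;
using UnityEngine;

[InitializeOnLoad]
static class PlayerShortcutProfile
{
    static PlayerShortcutProfile()
    {
        EditorApplication.playModeStateChanged += state =>
        {
            var shortcuts = ShortcutManager.instance;
            if (state == PlayModeStateChange.EnteredPlayMode)
                shortcuts.activeProfileId = "Player";   // empty profile: no editor hotkeys fire
            else if (state == PlayModeStateChange.EnteredEditMode)
                shortcuts.activeProfileId = ShortcutManager.defaultProfileId;
            Debug.Log($"Shortcut profile: {shortcuts.activeProfileId}");
        };
    }
}
```

The full gist swaps profiles on Game view focus gain/loss instead, which is what actually fixes the CTRL and Z conflicts while playing; this play-mode version trades that precision for brevity.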
Blogging. Every so often I take up the torch. And every time I do, enough years pass in between that everything about the best way to blog changes. More or less.
First up, might as well make an effort to achieve high visibility – which means using WordPress to hook into all these “social media platforms” so that my blog post can be broadcast to as many bored eyeballs with active fingers as possible. The first item on that list, Facebook. Easy!
Wrong.
Facebook only lets WordPress post to Facebook Pages, which are basically Facebook’s fancy new digital-age version of the classic local-government “doing business as” registrar. Fine. I created a page, and called it:
The Corporate Communist

Our Mission: develop a unifying theory of four of the greatest tools in humanity’s great sociopolitical experiment: Capitalism, Socialism, Oxymorons, and Alliterations.
Awesome.
WordPress/FB still complains. Why? I can’t post to the Corporate Communist because the Page is marked as Private. Like, yeah. I might want to try this first — in private — before putting it out for all the world to see. Apparently FB doesn’t allow this. Clearly, Facebook isn’t too jazzed with the idea of WordPress using their platform to monetize WordPress users and the WordPress business ecosystem.
OK, Made it Public. Success. Extra Awesome.
Conclusion: WordPress is finally allowed to perform CTRL-C + CTRL-V on my behalf to Facebook. I think I will need to post at least 20 times for me to get a Return on Investment (ROI) on the time I sunk into creating this page vs. just manually pasting my blog URLs into my FB status. If the last five or so attempts at blogging are an indication of trend, I’ll not reach that ROI break-even point.
Next, I wanted to write a post to my new page, The Corporate Communist, from inside Facebook. Only because I thought it would be a nice baseline task, ahead of the (supposedly more complicated) process of WordPress posting on my behalf.
Wrong again.
Long story short, I still can’t figure out how to do it. There’s new facebook and classic facebook, and different instructions for both, and different UI depending on if I’m trying to post as the Corporate Communist to a friend’s feed, or trying to post as the Corporate Communist to the Corporate Communist page itself (the former is much more obvious than the latter).
Moving on: have WordPress post for me, and hopefully it works. Since I wasn’t afforded the basic option of verifying this privately, I feel like a speaker who gets up on stage, steps to the mic, and taps…
I was evaluating a nifty-looking lightweight JavaScript interpreter called Duktape. It has a set of performance metrics, the first of them being Octane. I googled Octane, never having heard of it before. My first hit was this: https://v8.dev/blog/retiring-octane
The short summary of that page is pretty simple. Octane was a benchmark for a programming language (JavaScript), and just like every other language benchmark ever made, it turned out to be minimally useful, even detrimental, because it tested for things no real programmer would ever do, and optimizing for those things meant generating worse code for real software. Google stopped using Octane early, then published this blog post a couple of years later when they realized too many other people were still treating it like something useful.
First, the blog reads like the same old sob story I’ve read about so many other short-lived compiler benchmarks — they result in over-optimization for the benchmark and hurt real software in the long run.
Second, I’ll never know why people feel compelled to turn such simple conclusions into extremely long-winded blog posts. A certain amount of verbiage is dedicated to defending the benchmark, pointing out the couple of narrow-scope items it apparently did help with. I have a hard time buying it, though, based on comments made at the end of the blog. Let’s review quickly:
By 2015, however, most JavaScript implementations had implemented the compiler optimizations needed to achieve high scores on Octane.
So it had an effective lifespan of three years. That’s not very good for a benchmark, even in the long history of short-lived programming language benchmarks. Also interesting is that this blog was written in 2017, more than two years after Google had already stopped using Octane internally. So Google released a benchmark, and while everyone else was busily trying to “be like Google!”, Google internally was going in a completely different direction. How very Microsoft-like of them. Just another interesting example of how being a large and successful corporate entity can yield competitive advantage even when you mean to be helpful.
Octane helped engine developers deliver optimizations that allowed computationally-heavy applications to reach speeds that made JavaScript a viable alternative to C++ or Java.
Wut? First, Java isn’t even a viable alternative to C++ for most “computationally heavy” applications, so I have no idea how those two ended up together in the same object of that sentence’s preposition. A quick browse through StackOverflow’s millions of questions about horribly slow Java performance on simple data algorithms is proof of that. Second, JavaScript on a good day can’t hold a candle to Java or C++. OK, sure, if you have a simple loop iterating over a fixed array, all three will give you a similar benchmark result, because that’s easy to peephole-optimize for. And if you have a lot of complex string operations, all three will give a similar perf profile, because most of the bottleneck is in memory allocation and in natively-optimized internal libraries. And sure, JavaScript and Java do better than C++ on memory allocation, but only because they defer the overhead to the garbage collector later on, which brings us to:
In addition, Octane drove improvements in garbage collection which helped web browsers avoid long or unpredictable pauses.
Fair enough. Garbage collectors are notoriously hard to debug and improve in real-world applications. It definitely pays to have a set of stress tests for various memory-pounding patterns. But I’m suspicious: if this test were so useful, why wasn’t it split out and offered separately from the Octane suite?
Please also take note: this “unpredictable pauses” thing wouldn’t even be a problem if they weren’t using a GC memory model in the first place. The highly volatile random-pause problem is a side effect of trying to get the perf up on the lazy execution of object-boxing and mutable string operations. It’s that simple. These sorts of problems only happen when you have a GC-based heap.
The next frontier, however, is improving the performance of real web pages, modern libraries, frameworks
No kidding? That should be the first thing you do, not the “next frontier”. With the exception of the garbage collector, the best way to optimize any compiler or interpreter is to benchmark real-world applications and optimize their hotspots. Maybe 20 years ago, when JavaScript was still new, benchmarking the real world might not have been practical. But that ship sailed more than a decade ago. Octane as a “benchmark” was obsolete before it even came out, and the V8 blog making a point of defending it only serves as more proof to that end.
By way of example, one of the best ways to benchmark a garbage collector is to take real-world JavaScript applications and:
change the GC parameters so that the GC either has very little heap or a whoooole lot of heap
feed the application an excessively large data set, which is something that can usually be done pretty easily even with web apps you didn’t write
Front-end JS apps are readily available for this kind of manipulation simply because they’re downloaded in source-code form and run on the client. Server-side JS apps are only slightly trickier, but the vast majority are built on top of the 100% open-source NPM ecosystem, so there are still plenty of real-world applications to optimize against. Every compiler team should exhaust these options before turning to synthetic benchmarks.
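CPython happens to expose its collector knobs directly, so (purely as an illustration in a different language than V8’s) here is what the “starve it / drown it” half of that experiment looks like. The function names and thresholds are my own for the sketch:

```python
import gc

def gc_stress(n_objects, thresholds):
    """Allocate a churn of cyclic (GC-only) garbage under the given
    collector thresholds and return how many collections each generation
    ran. A rough stand-in for the 'very little heap vs. whole lot of
    heap' experiment: tight thresholds approximate a starved collector,
    huge thresholds a lazy one."""
    gc.enable()
    old = gc.get_threshold()
    gc.set_threshold(*thresholds)
    before = gc.get_stats()
    try:
        for _ in range(n_objects):
            # Reference cycles force the tracing collector to do the work;
            # plain refcounting alone would never free these.
            a, b = [], []
            a.append(b)
            b.append(a)
    finally:
        gc.set_threshold(*old)
    after = gc.get_stats()
    return [g["collections"] - s["collections"] for s, g in zip(before, after)]

# The same workload under starved vs. lazy settings:
eager = gc_stress(50_000, (50, 5, 5))          # tiny "heap": collect constantly
lazy = gc_stress(50_000, (100_000, 100, 100))  # huge "heap": barely collect at all
```

Feeding the same real application both configurations, with a deliberately oversized data set, is exactly the kind of pause-hunting a GC stress suite should be doing.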
An engineering choice made by Microsoft around the release of Visual Studio 2015 was to explicitly require the Windows Platform SDK version to be specified by each and every C++ project. There is a reason for that choice; it’s rather lengthy and has specifically to do with UWP. If you’re curious, read on to the end of this entry.
The odd part is that there’s no default fallback setting. If your project doesn’t request a specific WindowsTargetPlatformVersion, it’s an msbuild error. There’s no built-in logic to fall back on whatever SDK the developer has installed, and there’s no msbuild property initialized with that information either. As a result, every project is forced to define a very specific WindowsTargetPlatformVersion, and any collaborating developer who attempts to build the project must have precisely that version of the Windows SDK installed. This causes a lot of unnecessary headache, since as far as Windows native application development is concerned, this very specific SDK version stuff just doesn’t matter.
Query the Registry!
I admit that I was never a fan of querying the registry from an msbuild project. In general I find the registry dangerous and often littered with bogus or outdated entries (a side effect of it being an entirely separate entity from the programs themselves, and also well-hidden from the user). But since Microsoft has neglected to provide this information on its own, I’m left with little choice. Here’s what I came up with, saved to a property sheet called SetWindowsTargetPlatformVersion.props:
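This is a sketch of that property sheet, not necessarily my exact original: the registry path, value name, and trailing-".0" handling reflect the standard Windows 10 SDK registration as I understand it.

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- 1. Explicit override via environment variable, e.g. for CI build nodes. -->
    <WindowsTargetPlatformVersion Condition="'$(WindowsTargetPlatformVersion)' == '' and '$(VC_WIN_SDK_VERSION)' != ''">$(VC_WIN_SDK_VERSION)</WindowsTargetPlatformVersion>

    <!-- 2. Otherwise ask the (32-bit view of the) registry which Windows 10 SDK
         is installed. The registry stores e.g. "10.0.16299", so append the ".0"
         that WindowsTargetPlatformVersion expects. -->
    <WindowsTargetPlatformVersion Condition="'$(WindowsTargetPlatformVersion)' == ''">$([MSBuild]::GetRegistryValueFromView('HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v10.0', 'ProductVersion', null, RegistryView.Registry32)).0</WindowsTargetPlatformVersion>

    <!-- 3. Fail-safe: if the registry lookup came up empty, fall back to a default. -->
    <WindowsTargetPlatformVersion Condition="'$(WindowsTargetPlatformVersion)' == '' or '$(WindowsTargetPlatformVersion)' == '.0'">10.0.16299.0</WindowsTargetPlatformVersion>
  </PropertyGroup>
</Project>
```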
This snippet is designed with two features in mind. First, the Windows Platform SDK can be selected via the environment variable VC_WIN_SDK_VERSION, giving the developer full control over SDK usage where required (also useful for automated build nodes). Second, the snippet provides a fail-safe in the event the registry key isn’t found for some reason, falling back to a default SDK version. I arbitrarily picked 10.0.16299, the latest installed version at the time I wrote the snippet.
Alas, there’s some bad news. This props sheet only works if you include it at a very specific place in your vcxproj project files. You cannot include it via the Visual Studio Property Manager UI, because msbuild will throw its error long before those sheets are even included. You need to edit the vcxproj by hand and import SetWindowsTargetPlatformVersion.props before Microsoft.Cpp.Default.props is imported, like so:
Modify your vcxproj accordingly
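In skeleton form (your ToolsVersion and existing groups will differ; only the first Import line is new):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- This import must appear BEFORE Microsoft.Cpp.Default.props is pulled in. -->
  <Import Project="SetWindowsTargetPlatformVersion.props" />

  <ItemGroup Label="ProjectConfigurations">
    <!-- ... your existing configurations, unchanged ... -->
  </ItemGroup>
  <PropertyGroup Label="Globals">
    <!-- ... your existing globals, unchanged ... -->
  </PropertyGroup>

  <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
  <!-- ... the rest of the project, unchanged ... -->
</Project>
```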
This ensures that the WindowsTargetPlatformVersion property gets set long before msbuild tries to use it.
And in the end our goal is finally accomplished: Happy open source dev’ing of Windows Native applications with Visual Studio 2017.
Windows SDK is in the Registry32 Namespace
Curiosity killed the dev, and I dared to ask myself: what’s going on with the RegistryView.Registry32 part? The whole 32/64-bit registry thing is a throwback to some seriously impressive hoops designed by Microsoft to allow 32-bit and 64-bit versions of the same application or shared library to be installed on one system and behave in an “expected” manner. The short of it is that an application’s registry key signature changes depending on whether the application classifies itself as 32-bit or 64-bit. For legacy 32-bit apps this is fine. For the Windows Platform SDK, which is a set of libraries and tools targeting several platforms including 32- and 64-bit Windows, the idea that it’s classified as either 32-bit or 64-bit seems like nonsense. Microsoft has decided to classify it as a 32-bit application and, probably, it’s very safe to assume that won’t change.
Extra Reading: The Strictness of UWP, Imposed on All
So why did Microsoft decide to force the Windows Platform SDK version on us in this manner? The change coincided with the integration of the UWP SDK, in or around Visual Studio 2015. The UWP SDK promises two things:
a rapid update and release cycle
strict version control checks at build and at runtime
By strict checks, I mean that a UWP app built against a newer SDK is blocked from running on an older version of Windows. In practice this shows up in the Windows Store when you try to upgrade an application and it tells you something along the lines of:
You must upgrade Windows to install this application.
That happens when the UWP SDK version used to build the app is newer than the version of Windows running on the computer. The rule of thumb is that a newer SDK typically has new features and APIs that would fail at runtime on an old operating system. Software built on older SDKs is allowed to run indefinitely, on the premise that existing APIs retain compatibility. There are also some package security and encryption benefits to this strict version-check rule. None of this is unique to UWP: Android, iOS, PlayStation, Xbox, etc. all have similar strict SDK versioning requirements.
The frustrating part comes from how Microsoft decided to impose this on everyone, all at once, and without any built-in means to manage it from a developer or build-system-administrator point of view. Windows native application developers don’t need to care about this strict version specification, and build system admins typically want to specify these kinds of SDK settings from build/CI interfaces, not via Visual Studio.
I came up with a tentative hack for allowing shebangs in .bat / .cmd files invoked from MSYS2. The purpose of this hack is to allow running batch files directly from an MSYS2 / MinGW shell. Normally that’s not permitted because the shebang syntax is invalid windows batch file syntax. Here’s an example of how we would normally specify a shebang:
#! /bin/cmd
:: This is a batch file, with batch file looking things like this:
@echo off
SETLOCAL
echo Success!
On Linux/MSYS, the hash (#) is widely supported as a comment character. But not so much on Windows. Running this batch file from MSYS throws an error:
$ ./batch-test.bat
'#!' is not recognized as an internal or external command,
operable program or batch file.
As a workaround though we can create an empty file by the name of #!.bat:
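From the MSYS2 shell, that’s one command (the quoting matters, since # starts a comment in bash):

```shell
# Create the empty decoy batch file. When CMD parses the "#! /bin/cmd" line,
# it resolves the "#!" token to this file and runs it, with "/bin/cmd" as an
# ignored argument.
touch '#!.bat'

# Then re-run the test batch file from the MSYS2 shell:
#   ./batch-test.bat
```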
It should have printed Success! but all we got was this:
C:\Projects\test>#! /bin/cmd
… and right back to the prompt, with no errors or anything. What happened? It goes back to the stupid rules of Windows batch files: when executing another batch file, you must use the CALL keyword to have control return to the initial batch file. Let me reiterate that in a bold quote box, only because it’s such surprisingly unexpected behavior by modern programming and software engineering standards:
When executing another batch file, you must use the CALL keyword to have control return to the initial batch file. If you do not use the CALL keyword, then the current batch file is completely unloaded and replaced by the target batch file!
So in my specific case here, what happens is #!.bat is loaded and run (which does nothing), and then CMD.EXE says “job well done!” and exits without returning control to batch-test.bat.
We can’t prefix our shebang with CALL however, so the only solution left is to create a dummy #!.exe instead of a dummy #!.bat. That will circumvent the silly CALL requirement.
But even then there’s a drawback: the syntax of the shebang must always contain a space between the ! and the / or else Windows will treat it like a directory path. Meh.
So by this point the whole thing feels like way too much work. Cleverness be damned, and time to move on to more practical problems with more practical solutions.
I’m in the process of making my first Chocolatey package, so I create the template and start sifting through the numerous generated template files. Here’s VERIFICATION.txt:
Note: Include this file if including binaries you have the right to distribute.
Otherwise delete. this file. If you are the software author, you can change this
mention you are the author of the software.
===DELETE ABOVE THIS LINE AND THIS LINE===
VERIFICATION
Verification is intended to assist the Chocolatey moderators and community
in verifying that this package's contents are trustworthy.
<Include details of how to verify checksum contents>
<If software vendor, explain that here - checksum verification instructions are optional>
… and I quote:
Verification is intended to assist the Chocolatey moderators and community
in verifying that this package’s contents are trustworthy.
How is a checksum pasted into a plain text file supposed to verify anything? If I’m intent on spoofing someone else’s package, I can just copy/paste whatever’s in that verification block into my own spoofed package, and now it’s “verified“. The only way security of this sort is actually effective is as part of a root-authenticated certificate chain, such as the SSL / TLS / HTTPS protocol stack, which uses Verisign and the like. The details of that process are not trivial; any verification signature process that is trivial is almost certainly subject to man-in-the-middle hijacks. Furthermore, even robust security like SSL is only meant for in-flight connection verification. It doesn’t work well for packages that sit around for weeks, where people can inspect and spoof them in all manner of ways.
In some cases it can be effective to have checksums published on trusted websites that users can cross-reference. A common case is downloading GNU software: GNU projects often link to third-party download services and open-source mirror sites, which means there’s a vector for spoofing. The checksum on the website should be checked against the checksum of the downloaded package (though of course almost no one does this). That doesn’t apply to Chocolatey. People aren’t downloading Chocolatey packages from GitHub links or websites; they have no frame of reference for a chain of trust other than the Chocolatey package database itself. Chocolatey’s package information screen tells them what the package website is. If the package is spoofed, then that website information can be spoofed too, giving a URL to some GitHub site that’s not actually mine. How is anyone to know otherwise?
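To be concrete about what a checksum can and cannot do: it only authenticates bytes against a digest obtained through a channel the attacker does not control. A quick illustration in plain Python (the package bytes here are hypothetical stand-ins):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest -- the kind of value VERIFICATION.txt carries."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for real package bytes:
genuine_package = b"the author's real package contents"
spoofed_package = b"the attacker's tampered package contents"

# A checksum that ships INSIDE the package verifies nothing: the attacker
# controls both the bytes and the pasted checksum, so it always "checks out".
checksum_in_spoofed_package = sha256_of(spoofed_package)
assert sha256_of(spoofed_package) == checksum_in_spoofed_package  # "verified!"

# A checksum only helps when it arrives via an independent trusted channel
# (e.g. the author's long-lived HTTPS site) that the attacker cannot rewrite:
trusted_checksum = sha256_of(genuine_package)
assert sha256_of(spoofed_package) != trusted_checksum  # tamper detected
```

The first assertion always passes no matter what the attacker ships, which is exactly the problem with a self-contained VERIFICATION.txt.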
You’re Good Ash, and I’m Evil Ash
It’s like one of those scenes in a movie where a person and their doppelganger stand side-by-side, and someone else is tasked with deciding who’s real and who’s the impostor. Website domain registrars work well enough because a domain is a persistent name we can attach long-term trustworthiness to. I trust the information on GNU.org or Facebook.com for the following reasons:
it qualifies as a trustworthy entity via longevity, brand value, and if it was malicious it would be an easy target for law enforcement
DNS names are pretty static and are only up for change every few years usually, so once it’s locked in as a trustworthy entity, we can feel safe that it’s not going to be different tomorrow on a whim
because spoofing domain names is really hard now (thanks to SSL / https)
Uploading a package to Chocolatey isn’t afforded these luxuries. The only chain of trust available is the Chocolatey account used to upload the package, and that is bound to my unique email account. The packages I upload to Chocolatey are precisely as secure as the password on my email account and my two-factor phone authentication. If I’m being spoofed because those things were compromised, the only way to know is to ask me questions only I would know, in private where my spoofing doppelganger can’t hear my response (and ideally not via my email, since that’s likely compromised), and hope I respond correctly. No amount of checksums or website URLs pasted into the package itself can change that.
-Properties
    Provides the ability to specify a semicolon ";" delimited list of
    properties when creating a package.
Unnecessarily verbose: “Provides the ability to”
Of course it provides an ability to do something, or the ability to affect some behavior. That is what every option or switch does.
Indicative of bad software design: “when creating a package”
This bit is actually necessary, even though it shouldn’t be. 90% of the use cases for NuGet involve creating packages. Another 5% involve generating a .nuspec from scratch or from an existing .csproj, where properties have no meaning. So based on that 95% use case, “when creating a package” should be safely assumed rather than explicitly stated.
Unfortunately, there’s the last 5%, where NuGet can be used to build project files according to a .nuspec. That seemingly innocent helper ‘ability’ spirals into a painful situation where this -Properties switch becomes dangerously ambiguous, because:
It’s a very common practice in larger projects to have special build properties specified to msbuild via command line.
NuGet does not provide a way to specify properties to msbuild.
… ergo, people looking to use this out-of-place -Build option in NuGet may very well expect -Properties to pass properties into msbuild. And that’s why the clarification is unfortunately required and, sure enough, most of NuGet’s options have extra text at the end indicating whether they affect build or package steps.
The idea that NuGet should be building anything doesn’t even make sense. It’s a responsibility that doesn’t belong within the scope of NuGet. I’m sure someone thought it was clever, saving them a few keystrokes or a quick shell script… but it has an extremely limited feature set compared to invoking msbuild directly, and it comes at a steep cost in complexity for everything else NuGet does.
Namespacing to the Rescue
That said, there are a couple of possibilities that would help clarify things without removing the build-step feature entirely. For one, NuGet could have sectioned its switches into namespaces: anything that affects the build step gets prefixed with -Build. That way it becomes clear which switches affect which parts of the NuGet logic pipeline:
-Properties
-Build-Properties
An even better option would have been to not have this -Build option under pack at all, and to have a dedicated nuget build command instead, with its own entire set of scoped options.
Chocolatey is a CLI package manager for Windows. It doesn’t have a basic installer — due to some idealistic belief system — so I decided to be snarky and make one myself:
I’m a big fan of Chocolatey. It offers a brand of CLI fan-service on Windows that hasn’t really been available previously. But it’s not without blemishes. Chocolatey follows a development paradigm called DirectingAttitude, meaning the software developers have a set of idealistic views that they directly impose on the user. Chocolatey imposes this worldview immediately, at the point when you try to install the software. The installation instructions at the time of writing look something like this:
… clearly that’s a bit of work and a roundabout process. And that bit at the end about “safety” probably deserves a blog post all to itself. So why in the world is this software so oddly difficult to install? Fear not, there’s a reason:
… and then it continues on a rant about how using Powershell to install the program ensures that Powershell is installed, and that’s why this is such a brilliant install process. But here’s the thing: an installer, such as an MSI or the EXE I’ve posted above, can perform the same check and report the same error if Powershell is missing. So clearly this can’t be the reason for all the hoopla.
Eating Its Own Dogfood
The manifesto ends with the following bits, which I’m pasting as quotes since Chocolatey’s website currently has some broken CSS layout that makes taking a screenshot of the message difficult:
The installation actually ensures a couple of things:
PowerShell is installed and is set up properly.
[.. lengthy rant about Powershell’s importance omitted…]
You are open to doing things in a slightly different way, e.g. working with packages as opposed to installers.
Some folks might say this means we are asking folks to learn to ‘do things “our way” because we know better’. It’s less about “knowing better” and more about learning that Chocolatey does things in a slightly different way. It does that because the world of software is not just installers. Software goes beyond Programs and Features and a system that can track all of that also needs to as well. Package management is not a new concept in the world of software, perhaps just newer to Windows. If folks are not open to that, then they are probably not going to be open to Chocolatey. And that’s completely fine. Chocolatey is not for everyone. We may eventually get to more of a masses approach. Right now we are targeting a specific type of audience – those that are looking for better ways to manage software on Windows and open to looking for the best process of doing that.
We already established that the first point, about Powershell, is nonsense. That leaves the second, which dives right into Directing Attitude, complete with a self-justification that they’re not directing the user because they know better, but because they want us to learn something. See? They’re not doing us a service because we’re dumb; they’re doing it because we’re uneducated. In this case, apparently, we need to learn how to open a command prompt and paste text, because that somehow “reflects” the way Chocolatey works, in some “educational” fashion.
FYI, there are plenty of things on the internet that tell you to copy some text and paste it into a command prompt. I’ve known plenty of people who have done all kinds of horrible things to their PCs by doing exactly that. I assure you, no one learns anything by opening a command prompt and pasting text, except perhaps how to type the word command and how to hit CTRL+V.
Did you learn anything about your PATH environment variable? Did you learn how or why that’s critically important to the entire concept of Chocolatey, and why it’s the #1 thing that sets Chocolatey apart from classic Windows software installations? No? Don’t feel bad. I’m not even sure Chocolatey quite realizes that, yet. They seem to think what makes them special / unique / useful is that they don’t have a GUI.
And so I made a Chocolatey installer, meant for the rest of us who know how to use a command prompt and also how to enjoy the luxury comforts of software aids that make our lives easier.
I like Chocolatey! But it has a rather dodgy idea of software safety. Here’s an excerpt from its unnecessarily convoluted install instructions; we’re mostly interested in the last part:
Safety First, Unless You’re Swimming on the Internet
Here’s the excerpt from the website that we’re interested in. Please feel free to open the link and inspect the script yourself (hint: I’m a veteran engineer and it’s hard for me to parse what’s happening):
NOTE: Please inspect https://chocolatey.org/install.ps1 prior to running any of these scripts to ensure safety. We already know it’s safe, but you should verify the security and contents of any script from the internet you are not familiar with.
Alright, so the key to safety is opening that script, learning what it does, and making sure it doesn’t do anything evil like DELETE COMPUTER or UPLOAD CREDIT CARD INFO. Great. I searched for those and didn’t find anything. Let’s run it!
Oh snap! It downloads 7za.exe from their website. Well, that’s a damn scary executable name if ever I saw one. And now just verifying this script isn’t enough: we need to verify all the other programs it downloads in the background, any one of which could be malicious. So let’s search for Download-File and verify them one by one. This sounds like work, but thankfully there are only two:
Write-Output "Getting Chocolatey from $url."
Download-File $url $file
Damn. That got complicated. So now we need to see what $url is defined as… and… that’s quickly gone out of scope for this blog. Long story short, $url could come from a command line parameter, it could be derived from any one of three environment variables, or it could be this:
And that’s where I stop. Good luck figuring out what it’s actually downloading and running, and expect a script several times more complicated than this one on the other end. Don’t you feel secure now?
I could do a whole running series on all the ways Powershell doesn’t make sense. Mostly I don’t use Powershell, and mostly no one I know uses it either, except that now we’re all forced to tolerate its presence whenever we use NuGet or Chocolatey (which is essentially an extension of NuGet).
Moving on – so what happens if you want to get the current version of Powershell installed on your system, or on the system of a user you’re writing a CMD script for?
C:\Users\jakes>powershell -version
Missing argument for parameter version.
Nope, that’s not it.
PowerShell[.exe] [-PSConsoleFile <file> | -Version <version>]
    [-NoLogo] [-NoExit] [-Sta] [-Mta] [-NoProfile] [-NonInteractive]
    [-InputFormat {Text | XML}] [-OutputFormat {Text | XML}]
    [-WindowStyle <style>] [-EncodedCommand <Base64EncodedCommand>]
    [-ConfigurationName <string>]
    [-File <filePath> <args>] [-ExecutionPolicy <ExecutionPolicy>]
    [-Command { - | <script-block> [-args <arg-array>]
                  | <string> [<CommandParameters>] } ]
PowerShell[.exe] -Help | -? | /?

-PSConsoleFile
    Loads the specified Windows PowerShell console file. To create a console
    file, use Export-Console in Windows PowerShell.

-Version
    Starts the specified version of Windows PowerShell.
No version information in the help output either, which is highly unusual. But at least now we know what -Version is for. It turns out you have to use $PSVersionTable.PSVersion to get version info. Fine, but then things keep getting weirder. I submit into evidence the following, all performed from a vanilla CMD prompt:
C:\Users\jakes>powershell $PSVersionTable.PSVersion
Major Minor Build Revision
----- ----- ----- --------
5 1 16299 98
Whoops, I guess not. The Powershell located in the v1.0 directory is apparently still PS5, which was installed by my Visual Studio and has a build number matching my Windows SDK version (16299). So, let’s try this -version thing:
C:\Users\jakes>powershell -version 1.0 $PSVersionTable.PSVersion
Major Minor Build Revision
----- ----- ----- --------
2 0 -1 -1
C:\Users\jakes>powershell -version 2.0 $PSVersionTable.PSVersion
Major Minor Build Revision
----- ----- ----- --------
2 0 -1 -1
C:\Users\jakes>powershell -version 3.0 $PSVersionTable.PSVersion
Major Minor Build Revision
----- ----- ----- --------
5 1 16299 98
C:\Users\jakes>powershell -version 4.0 $PSVersionTable.PSVersion
Major Minor Build Revision
----- ----- ----- --------
5 1 16299 98
C:\Users\jakes>powershell -version 6.0 $PSVersionTable.PSVersion
Cannot start Windows PowerShell version 6.0 because it is not installed.
… well at least the last one makes some damn sense.
A Conclusion?
As far as I can tell, there are two versions of Powershell: version 2.0, and version whatever-is-latest. And they’re all installed in the entirely misleading directory location %SystemRoot%\System32\WindowsPowerShell\v1.0.