Phil Zimmermann Is At It Again October 19, 2012
Posted by Peter Varhol in Software tools, Technology and Culture.Tags: PGP, security
add a comment
I am old enough to remember when Phil Zimmermann released Pretty Good Privacy, or PGP, as open source, circa 1991. I followed his strange but true legal travails with the US government for several years, in which he was under investigation for illegal munitions exports (PGP encryption), yet never arrested. It was only after three stressful years that the US government concluded, well something, and told him that he was able to go about his business.
Now there is a mobile app called Silent Circle that employs the same encryption on a phone, for voice, email, and text. PGP remains, well, pretty good, with an awful lot of computing horsepower and time required to break it.
PGP employs public/private key encryption. My public key is freely available, and anyone can use it to encrypt a message to me; only I hold the matching private key that can decrypt it. (In practice PGP is a hybrid system: the message itself is encrypted with a symmetric session key, typically 128 bits, which is in turn wrapped with the much larger public key, 1024 bits or more.)
I’m also old enough to remember the Clipper chip, a microprocessor that had embedded strong encryption for communications purposes. The catch was that the chip was designed by the NSA, and while the encryption was valid, it also included a “back door” that enabled the US Government to tap into it (purportedly with a court order, though I have no doubt that it could be done otherwise, for purposes other than criminal prosecution). The effort failed miserably, as computer and phone makers declined to use it, and other parties railed against it.
The Clipper chip was obviously ill-conceived (though, oddly enough, apparently not to the government). But I am in favor of law enforcement, though without the spectre of big brother government. These trends will always conflict, and it is right that they do so. Still, it is also right that freedom of, well, speech win out in this argument, even in the face of criminal activity. Let us find a different way to catch our criminals.
Is Impatience a Virtue? September 7, 2012
Posted by Peter Varhol in Software tools, Technology and Culture.1 comment so far
I grew up in a household of very limited means. If I wanted something beyond the basics, I had to save my paper route (do those still exist?) money, do odd jobs for pocket change, and in general deprive myself for weeks or months until I had the funds necessary. I waited, somewhat patiently, until a desired goal was within my grasp. Such an upbringing probably contributed to my not getting caught up in the credit economy, and coming out of the economic shocks of the last decade or so relatively unscathed.
But there are technology trends favoring impatience. Thanks to the speed and ubiquity of Google, we have access to information that in the past may have been completely unavailable, or at least would have required hours or days of research in the local library.
Now Evan Selinger makes the claim that tools such as the iPhone’s Siri are turning impatience into a virtue. When we want to know an answer, we ask Siri. We may not even trust our own senses, instead preferring to ask the one who has all of the information (“Siri, is it raining outside?”).
He quotes MIT Research Fellow Michael Schrage (who was the only columnist worth reading in Computerworld circa early 1990s) as saying “How would you be different if you regularly had seven or eight conversations a day with your smartphone?”
I’m not calling any of this a bad thing. We have the tools to be more knowledgeable and informed individuals, which may make us better consumers, better citizens, and more tolerant of other points of view. Technology that aims to please ultimately makes information more accessible to more people. These are generally good outcomes.
But it is different from the way we functioned in the past, and may have implications for our daily lives, from how we process information to how we make decisions. I’ve always believed that no decision should be made before it has to be made, so that we can watch how the information plays out over time. Having information so seamlessly available may mean that we’ll think we know more than we do, and make decisions more quickly. That may not be the best outcome.
How Do You Marry Java and .NET? April 13, 2012
Posted by Peter Varhol in Software development, Software platforms, Software tools.1 comment so far
Years ago, I did some work for Mainsoft, which had a technically cool way of running .NET code on Java application servers. This involved dynamically cross-compiling .NET into Java. The idea was that you could create your application (typically a web application) using Visual Studio, then with a minimum of effort, configure it to run on a JVM and application server.
I would provide a link for Mainsoft, except that the company has changed names and markets (www.harmon.ie). Apparently it wasn’t a good enough idea for a company to make money from. Part of the problem was that there were two paths to getting .NET to run on Java – you either did byte code translation, or you implemented some or all of the .NET Common Language Runtime in Java. Mainsoft did mostly the first, but also found that it was easier to use Mono classes for a lot of the CLR.
But now there seems to be a way to do it in the opposite direction; that is, running a Java application in .NET.
Mainsoft’s strategy was easily comprehensible but rather niche – developers were experienced in .NET and wanted to use the best .NET development tools possible, but the enterprise wanted flexibility in deployment.
IKVM, an open source project, uses some of the Mono classes to enable Java code to run on .NET. It also includes a .NET implementation of some Java classes. I’m not sure why Mono is needed in this case (and in fact, it’s likely that the Mono project has largely run its course). IKVM lists three components to the project:
• A Java Virtual Machine implemented in .NET
• A .NET implementation of the Java class libraries
• Tools that enable Java and .NET interoperability
Microsoft was promoting IKVM during its language conference, obviously as an existence proof for the concept that there may be people who are interested in porting from Java to .NET in this manner.
Still, as a practical matter, it doesn’t seem worthwhile doing. There are plenty of JVMs for Windows (I realize that .NET is a subset of Windows, but as a practical matter is pretty well tied to it). The distinction of running on .NET is one that most won’t bother to make. Perhaps someone out there can offer an explanation.
In Memory of Dennis Ritchie October 14, 2011
Posted by Peter Varhol in Software development, Software platforms, Software tools.1 comment so far
I woke up this morning to the news of the passing of Dennis Ritchie, computer science researcher at AT&T Bell Labs and inventor of the C programming language (he actually passed away last weekend). The C Programming Language, a language manual by Brian Kernighan and Dennis Ritchie, was the bible for several generations of professional programmers.
The late 1960s and early 1970s saw the rise of timesharing, in which multiple users shared the same computer. The research state of the art at that time was Multics, a joint project of MIT, General Electric, and Bell Labs. Bell Labs took those concepts, and Ken Thompson developed the Unix operating system (with Ritchie’s assistance). Ritchie then designed the C language as a high-level language married closely to Unix and its underlying API and commands.
However, C was also created as a platform-independent language, a nod to the fact that Unix would eventually be ported to dozens (probably more like hundreds) of different processors and hardware architectures. For example, the language itself has no built-in string type; a string is simply an array of char ending in a null byte, with string handling relegated to library routines. So C programmers got used to manipulating arrays of char directly (third parties eventually came out with richer string libraries for different platforms).
C has elements of both a high-level language and a systems programming language. It has high-level constructs, but can also directly access memory locations through pointers. It does no automatic management of heap memory; malloc and free are among the first calls learned by aspiring C programmers. Further, C’s type checking is weak; programmers can cast data from one type to another, irrespective of the types’ sizes, at their own risk.
C also supports function pointers: a variable can hold the memory address of a function, and calling through that variable (dereferencing the pointer) invokes the function indirectly. This makes callbacks possible, but can also make for some extremely convoluted programming constructs.
These characteristics and others made C extremely flexible, but also extremely prone to programming errors. When I was the BoundsChecker product manager at Compuware NuMega Labs, we determined that a large majority of C (and its object-oriented extension C++) programming errors were memory errors. It is simply too complex for most C/C++ developers to fully understand and control how they are using memory.
These errors eventually became so unmanageable that many development teams moved to managed languages such as Java or C#. Both languages (as well as niche languages like Lisp and Smalltalk) automatically allocate memory when you define and use a variable, and reclaim that memory when there are no longer any references to it, through a technique called garbage collection. But many commercial applications still use C/C++, either for legacy or performance reasons.
I was a C programmer for a brief period of my career, and occasionally taught C++ as an academic. During my time in academia, I wrote a discrete event simulation application in Pascal (invented by Swiss computer scientist Niklaus Wirth), a similar language that provided much stricter type checking. Despite the popularity (and to a large extent necessity) of managed languages today, I still firmly believe that you can’t truly understand how to program a computer unless you have a clear picture of how your code is using memory. And we owe that view of memory to Dennis Ritchie and C.
Wintel for the Smartphone Crowd November 14, 2010
Posted by Peter Varhol in Software tools.add a comment
Wintel is the mashup term for the duopoly of Windows and Intel, dominant for so many years of desktop computing. While it happened largely by accident, Intel processors and Microsoft Windows operating systems employed a loose partnership that powered a very high percentage of computers using the PC-standard architecture.
This article postulates a similar duopoly of Qualcomm and Android. Qualcomm is a principal maker of CDMA chipsets for phones (the alternative is GSM, used by much of the rest of the world), while Google’s Android is an open source operating system for phones and perhaps tablets and other small form factor devices.
At first the comparison seems lame. Phones use a variety of different application processors, of which Qualcomm’s Snapdragon is only one. The communication chipsets may or may not have the same impact as the processor. They don’t drive application compatibility, so I would argue that they aren’t as important as the CPU.
Further, Europe and Asia are not going to convert to CDMA, so this partnership will not become a global standard. It really only applies to the US, and more specifically to the Verizon network (my own carrier, US Cellular, is also CDMA, and makes use of much of the Verizon network, so my phone is also CDMA).
But to someone who was there in the early days of the Wintel story, the parallels seem more apparent. Up until the mid-1990s, it was by no means assured that Wintel would be as dominant as it was. Unix (not Linux until later) was the only high-end desktop operating system, and Alpha, MIPS, and POWER processors were for those who needed the horsepower that Intel couldn’t provide. It’s worth noting that when Microsoft introduced Windows NT in 1993, it included versions for both Alpha and MIPS as well as Intel x86 (NT had originally been developed on the Intel i860).
Because of the different communication standards used, and the massive amounts of infrastructure needed to support those standards, it seems unlikely that Qualcomm and Android can achieve anywhere near the dominance of Wintel. But the fact that the question is being asked says a great deal about how the phone continues to become the next dominant platform.
Are Domain-Specific Languages the Next Software Engineering Breakthrough? November 11, 2010
Posted by Peter Varhol in Software development, Software platforms, Software tools.1 comment so far
Almost since people have been writing software, we’ve looked for better, more efficient, and more intuitive ways of doing it. First-generation languages (machine code) gave way to second-generation languages (assembly), which were in turn largely abandoned in favor of the third-generation languages in mainstream use today. Third-generation languages started with the likes of Fortran and Cobol in the late 1950s; today we use C# and Java, plus a number of other less mainstream but still-important languages of the same generation.
That’s not to say that we haven’t tried growing beyond the third generation. For a while in the 1990s, fourth-generation languages like PowerBuilder and Progress attempted to make data access more intuitive. Earlier, in the 1980s, Japanese industry and academia embarked upon a far-reaching but poorly understood Fifth Generation Computer Systems project that didn’t have any wide-ranging impact on software development.
And C# and Java represent a significant advance over the likes of C++ and Ada in that they are managed languages. Rather than requiring the programmer to manually write code to allocate and deallocate computer memory, the underlying language platform does it automatically.
But at a conceptual level there’s little else fundamentally different between Fortran and C#. To be clear, there is much that is different, but the language instructions themselves haven’t changed a whole lot. While we have libraries and frameworks that enable us to abstract a bit more today (and doing so may create its own problems), we are writing code at the same conceptual level that we did fifty years ago.
The buzz in the industry over the last several years has surrounded so-called domain-specific languages, or DSLs. I’m reminded of this by this article on former Microsoft executive Charles Simonyi, who has since founded a company to create a foundation for implementing DSLs. I’ve also participated in several conferences over the last couple of years where speakers have promoted DSL concepts.
DSLs are an attractive concept for a number of reasons. Because they focus on a specific problem domain, they tend to be fairly simple. Domain experts, rather than programmers, may be willing to adopt them because they abstract a programming problem into terms that they understand, and can build solutions for.
I really like the idea, but I’m doubtful in practice. Fifteen-plus years ago as an academic, I actually wrote a DSL, a visual language for discrete event simulation. I loved it, but even those interested in discrete event modeling were flummoxed at some of the things I did. And these were people who were used to thinking in those terms. Languages meant for specific types of problems have to be designed very carefully (I probably didn’t do that) just in order to appear on the radar.
And I think we’ve debunked the concept of the citizen programmer. I’ve seen a lot of products intended to bring programming to the user come and go over the last twenty years. Microsoft’s original Visual Basic was intended to do just that, but it was successful only because it was useful to professional programmers. While a few domain experts adopted 4GLs and became programmers (I saw that while working at Progress Software), it’s very much an exception.
We prize programming languages in large part for their versatility, not their simplicity or their utility for a specific purpose. DSLs aren’t flexible. Even in a problem domain, you want the ability to draw in instructions and tools from other domains, and from a large toolkit in general. Unless we rethink the fundamental concepts behind DSLs, this will be the next breakthrough that never was.
Patent Fight Over Java Likely Means No Winners September 1, 2010
Posted by Peter Varhol in Software development, Software platforms, Software tools.1 comment so far
One of the commenters in my previous post noted that the lawsuit filed by Oracle against Google was a patent fight, not a license fight. It apparently involves a clean room VM developed by Google that allegedly violates one or more of the existing patents Sun was originally awarded on Java. It is possible to violate a patent, in effect copy a protected invention, even though the logic behind that invention is distributed via an open source license.
I’ve written a lot about software patents in the past. I’ll say two things regarding them today. First, software patents are never as cut and dried as they may appear at first glance, and predicting the outcome is virtually impossible. I don’t know who’s right here, but being in the right isn’t necessarily indicative of who will win the suit. A duplicate of the invention itself isn’t typically sufficient; the steps in creating the invention matter just as much.
Prior art may come into play in Google’s favor. Java certainly wasn’t the first managed platform; Smalltalk and Lisp immediately come to mind as predating it. There were also other clean room Java VMs. A company called NewMonics, started by Kelvin Nilsen, did fundamental and applied research into garbage collection, and came out with products supporting a hard real-time Java for embedded use (NewMonics was acquired by Aonix several years ago, and its PERC virtual machine still seems to be alive).
Second, if this were two ordinary software companies, I would say that the suit would be settled by a cross-licensing of each other’s patent portfolios. Today, tech companies accumulate patents as a defense mechanism, in case of a lawsuit. It is often possible for the defending company to countersue, identifying technologies from the plaintiff that may infringe on its own patents. This often ends in a settlement where little if any money changes hands, and both companies continue as they did before.
(Incidentally, since patent trolls don’t make products, this defense won’t work with them.)
However, I suspect that larger issues are at work here that are likely to preclude an amicable settlement.
Last, in Oracle’s favor is the fact that, in engineering practice, clean room design is difficult to prove. Companies engaged in clean room design often require that engineers have no prior knowledge of the technology, so that they can plausibly argue that any similar design approaches are accidental rather than intentional. In the case of Java, I doubt that it’s possible to hire engineers with no prior exposure who could still produce the VM.
The losers of this patent fight are all of us. However it may turn out, it will cause uncertainty as to just what we can do with Java, and how we can do it. For the vast majority of companies building custom applications for internal use, it doesn’t matter directly. But it’s likely that they’ll have to pay extra attention to the tools and frameworks they use, as it may not be clear what is patented, and by whom.
Writing Better Code Doesn’t Get You to Perfection March 25, 2010
Posted by Peter Varhol in Software development, Software tools.1 comment so far
I spent a number of years of my professional software career building tools to enable developers and testers to find and fix bugs. As such, I’m cognizant of how errors in coding can manifest themselves much later, and in unpleasant ways. This report from the Pwn2Own hacking contest describes how professional hackers bypassed important security features of Windows 7 and executed a successful attack on Internet Explorer 8.
Building commercial development tools is a very difficult business today, as I’ve alluded to in a recent post. These tools include debuggers and more general-purpose error detectors, performance analyzers, automated code review engines, and code coverage analyzers.
The vast majority of developers don’t bother with such tools unless they know they have a problem. They represent a fire-fighting technique rather than a methodology for development. So sales are dependent not upon a process, but rather an immediate pain.
That’s not to say that tools can single-handedly solve all of the malware problems that computer users face on a regular basis. I’ve read the entire report of how Peter Vreugdenhil bypassed the Windows 7 DEP (data execution prevention) and ASLR (address space layout randomization), and it’s pretty clear to me that there was no bug or design error in what Microsoft had done here.
Both features represent reasonable responses to common hacking techniques. ASLR moves Windows components around randomly in memory, so that hackers have no guarantee of where to find them to even begin a hack. DEP prevents code that has been injected into a data region from being executed.
But they are not guarantees against successful hacking. Hacking involves a great deal of detective work, an intimate understanding of how code executes in memory and what it does, and good tools to look at every single location in memory and what’s happening there, on a step-by-step basis.
(Disclosure: A decade ago I was the evangelist for a software product called SoftICE, which could interrupt the operating system – Windows or DOS – and look into memory locations and even processor registers. It was the ultimate debugger, and we marketed it as a device driver development tool, but it was also the favorite tool for hackers. Once our corporate masters understood what it could be used for, it was gradually decommissioned.)
But this hack demonstrates once again that there is no completely secure system, except one with no I/O that is locked inside a vault, and is just about useless for any practical purpose. It’s easy to complain about the seemingly vulnerable Windows, or to extol the virtues of alternatives, but much of that is based on the fact that hackers can make their biggest impact by hacking Windows.
And it’s not going to get any better, as long as there are people who see a challenge or profit in hacking. So despite the millions of dollars that are spent on firewalls, anti-virus software, software tools, and the like, computer users still have to be careful on the Internet. Developers can’t build hack-proof software.



