California Budget Crisis

Congratulations to Abel Maldonado for helping to break the logjam in California’s (latest) budget crisis. The budget crisis, which now seems to be an annual event, was especially bad this time, both fiscally and politically. The “gap” in the budget amounted to $41bn. The political situation is not much better.

The classic argument seems to be between the Democrats, who want to close the gap by raising taxes, and the Republicans, who want to close the gap by cutting spending. Of course, things are more complicated than this, but that’s essentially it. The problem is that the situation is so politicized — and so polarized — that nobody can find any common ground, and so progress is extremely difficult. The California Senate was locked in its chambers two nights in a row, and only after that was one senator (Maldonado) bribed enough to cross party lines to support the budget.

When I say “bribed” I don’t literally mean that they paid him off. The Democrats made a number of concessions to win his vote. Most notably, they agreed to support open primaries, and they agreed to drop the proposed 12c/gallon gas tax from the budget. The Republicans are not likely to look kindly on Maldonado’s move. After all, earlier in the week, they ousted minority leader Dave Cogdill and replaced him with “anti-tax hard-liner” Dennis Hollingsworth. The reason for Cogdill’s ouster? He cooperated with the Democrats in putting together the current proposal! So Maldonado is likely to be viewed as a traitor. They’d draw and quarter him if they could get away with it.

It’s a risk, but of course there’s something in it for Maldonado. Crossing party lines is likely to put him in jeopardy with the Republican party machine. But if there’s an open primary, he can get Democratic voters to vote for him. After all, he’s the hero who helped save the budget, right? He has also earned a lot of publicity. Who had heard of Maldonado before the budget vote? Probably nobody outside his district. The press coverage he’s getting now sets him up for another run at statewide office. (He ran unsuccessfully for state controller in 2006.)

Unfortunately, at least one of the concessions made to get his vote is terrible. I don’t really care that much about open primaries, but removing the gas tax (while leaving the sales tax increase in place) is a mistake. With the current economic situation, raising the general sales tax is exactly the wrong thing to do. If you’re going to raise any tax, the gas tax is the one to raise. When gas was over $4/gallon last year, people really did change their behavior. They drove less, took public transit more, and stopped buying gigantic SUVs. Yes, it’s painful, but high gas prices help improve air quality and reduce our dependence on foreign oil. And so the California legislature made a move to keep gas cheap. Wonderful.

Unfortunately, it’s quite common for the legislature to do things that don’t make sense. The usual analysis of the oft-recurring budget problems concludes that the root causes lie in Proposition 13 (from 1978) and in the requirement of a two-thirds majority in the legislature to pass a budget. I think these are indeed problems, but that analysis rather misses the point. The real problem is that the legislature doesn’t know how to save money for the future. Remember the idea of saving money for a rainy day? It means: don’t spend it all when the sun is shining. But when the economy is booming and tax revenues are up, the legislature says “Great, we have all this money, let’s spend it on all those pet projects we’ve always wanted to do!” When the bust comes and tax revenues fall, we get a huge budget gap. I am slightly sympathetic to the Republicans when they say we don’t have a revenue problem, we have a spending problem. But I don’t hear them — or anyone — saying that we should save money (i.e., run a budget surplus) when times are good, so that we won’t have a problem when times are bad.

It’s politically difficult to do this, I know. But as Rahm Emanuel said recently, “You never want a serious crisis to go to waste.” What California needs to learn from this crisis is how to save money.

The other day I wrote an entry about using bind/trigger on a local variable and what can go wrong if you do this. But why would somebody want to do such a thing? Isn’t this just an obscure corner of the language with a curious behavior?

It turns out that this example came up in actual code, and it caused us quite a debugging headache.

Take a look at the HttpRequest class. It has a fairly complicated state machine. The current state of the object is visible both through state variables (started, connecting, doneConnect, etc.) and also through a series of callbacks (onDone, onConnecting, onDoneConnecting, etc.). Strictly speaking, these are redundant. An earlier version of this API didn’t have the callback interfaces. I’m not completely sure why, but I think that callbacks (like listeners) were viewed as a Java-like construct, and the designers of the API wanted an interface with more of a JavaFX-Script flavor. This is completely understandable; the shape of an API is intimately intertwined with the mechanisms and constructs available in the language.

In Java, writing a class with public fields is poor style. It allows uncontrolled writes to the field, and there is no way for anyone — neither the client nor the class’s implementation — to detect when such a field has been modified. Instead of exposing a field, you have to provide getter and setter methods. If you want a client to be notified when your object’s state changes, you have to set up a listener of some sort. It’s very common for classes to use listeners to notify clients of state changes. This has led to a proliferation of listener interfaces in the class library, which in turn has led to a proliferation of little listener methods in client code. Most listeners are very small. They usually just copy a value or call an update method. Even a one-line listener requires half a dozen lines of inner class boilerplate and a fair number of confusing braces and parentheses. I think this has contributed quite a bit to Java’s reputation as a verbose language.

For example, in Java, if you have a Rectangle rect that you want always to be 50 pixels over and 100 pixels down relative to the location of otherRect, you’d do something like this. (This doesn’t correspond to any actual Java class, but you can see the point.)

otherRect.addStateChangeListener(
    new StateChangeListener() {
        public void stateChanged(Rectangle otherRect) {
            rect.setLocation(otherRect.getX() + 50, otherRect.getY() + 100);
        }
    }
);

By contrast, in JavaFX Script, an object’s variables can be made read-only to the general public using the public-read access modifier. Furthermore, clients can detect changes to another object’s variables by using the bind mechanism. This leads to a style of object coupling where objects expose state via publicly readable variables, and where clients bind to them in order to pick up state changes. Binding works great if your object’s state variables are updated as a function of some other object’s state variables. The rectangle location updating code would look like this in JavaFX Script:

var rect = Rectangle {
    x: bind otherRect.x + 50
    y: bind otherRect.y + 100
}

This is really cool. I think we’d all agree that the JavaFX Script example is much more concise, powerful, readable, and understandable. Great!

The problem is that, while bind works well when updating values as functions of other values, it doesn’t work so well when you want to take action (that is, perform a procedure) upon certain state changes. Let’s imagine that the HttpRequest object had no onInput callback (as was the case in the past). When the request body becomes available, the input field of the HttpRequest object is set to an InputStream from which the data can be read. In this style of API, instead of callbacks, clients of the HttpRequest class are expected to use bind to detect state changes. Let’s try to write some code that does this.

We want to bind to the request’s input field… but bind can only appear as the initializer of a variable declaration, or as the initializer within an object literal. So we’ll have to cook up another variable upon which to hang the bind:

var req = HttpRequest { ... };
var xyz = bind req.input;

This doesn’t do us much good; all it does is update the xyz variable when req.input changes from null to a valid InputStream. Recall that a bind expression causes re-evaluation of the portions of an expression that are affected by a change to a bound value, including function calls if a bound value is a parameter to a function. So we could try something like this:

var req = HttpRequest { ... };
var xyz = bind handleInput(req.input);

function handleInput(is: InputStream) {
    ...
}

This doesn’t really work, however. We have to return a value from the handleInput() function, and this has to match the type of the xyz variable. But this value is essentially unused and is merely a distraction:

var req = HttpRequest { ... };
var xyz: Boolean = bind handleInput(req.input);

function handleInput(is: InputStream): Boolean {
    ...
    return true;
}

You can leave off the Boolean type declarations on xyz and for the return value of handleInput(), because the compiler will infer the proper type. Still, it’s a bit clunky that you have to declare a useless variable and return a useless value from the handleInput() function.

Isn’t there a better way? There sure is. JavaFX Script has a trigger mechanism (which is spelled on replace) that allows some arbitrary code to be executed when a variable’s value changes. If we were to use a trigger, it would look something like this:

var req = HttpRequest { ... };
var input = bind req.input on replace {
    // read from input here
};

This is quite a bit better. We don’t have to cook up a function with a new name, and we don’t have to declare a useless variable and return a useless value from our function. The on-replace code is tied directly to the new variable, the one that’s bound to the variable we’re interested in. This is pretty concise and powerful.

I’m starting to see this idiom pop up in a lot of code. It’s useful under the following circumstances: a) you want to write code that’s triggered on a readable variable in another object, and b) that other object doesn’t provide a callback function or listener. Ideally, in some sense, you’d want to install a trigger on the variable in the other object. But you can’t do that: you can only install a trigger at the declaration of a new variable. So, you have to declare a new variable of your own, use bind to copy the value of the other object’s variable, and use an on replace trigger to have your code run when the other object’s variable is updated.

This technique reminds me of the Introduce Foreign Method refactoring, where you can’t add a method to another class, so you add it outside and treat it idiomatically as if it were a new method on that class. I’ll therefore call this technique the foreign trigger idiom.

This is all sort-of moot now, since my example is based on the older version of the HttpRequest API that didn’t have callbacks. As of 1.0, the HttpRequest class has callbacks, so instead of using a foreign trigger you’d just supply a function as the value of the onInput variable of HttpRequest. But there’s still a need for foreign triggers in other parts of the API. Consider the Image class. This class allows images to be loaded in the background, by setting the backgroundLoading variable. How can you tell when the image is done loading? There’s no callback function, but the progress variable is updated and reaches 100 when the image is finished loading. So you could do something like this:

var img = Image {
    backgroundLoading: true
    ...
};
var progress = bind img.progress on replace {
    if (progress == 100) {
        // take action now that img is done loading
    }
};

You can see this idiom in use in various JavaFX samples, such as the one here.

All well and good. But what does this have to do with the stuff I was talking about earlier, regarding the lifetime of local variables?

If you’re writing a simple script, you typically declare your variables at top level. These variables live as long as your script is running, and the objects they refer to aren’t garbage collected for the lifetime of your application. So the foreign trigger idiom works perfectly well in these cases.

Now suppose you’re writing a program where HttpRequest operations are performed repeatedly. For example, you might want to fetch all the photos in a particular photo set, or you might want to fetch all the calendar entries for each day of the month. Clearly, you don’t want to declare separate variables for each of these requests. You’d want to wrap things up in a function, and have this function called repeatedly as often as necessary. The code would look something like this:

function getEntryForDate(date: Date) {
    var req = HttpRequest { ... };
    var input = bind req.input on replace {
        // process input and convert to a calendar entry
    }
}

BANG! Can you see the bug? If not, look again!

The problem is that the trigger was declared on a local variable, and this local variable is subject to garbage collection. This code sometimes works and sometimes doesn’t work. In fact, this is the most insidious kind of bug. You can take the code and isolate it into its own script (using script-level variables) and it will work perfectly. If you use the function as-is and call it from a simple test program, it will almost always work. That’s because in a simple test program, not much else is going on, and GC probably won’t occur. But put this into a big application, and call it 30 times to get all the appointments for a month. GC happens, and suddenly and randomly your triggers stop firing.

The consequences are fairly dire for HttpRequest, since it requires the InputStream to be closed to indicate that processing of the request has been completed. This processing is usually handled by a trigger, but if the trigger is GC’d, processing of the request never completes. The HttpRequest implementation has a limit on the number of outstanding requests. Eventually the pending request limit will be reached, no new requests will be issued, and the system will grind to a halt. We tore our hair out for about a week until we figured out what was going on.
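One possible workaround is to make the trigger reachable from somewhere long-lived. Here’s a sketch of one way to do that (the EntryFetcher class and the fetchers sequence are names I’ve invented for illustration; they’re not part of any API): hoist the bound variable and its trigger into an object, and keep each such object in a script-level sequence until its request has been processed.

```
// Script-level sequence; it keeps each fetcher -- and thus its bound
// variable and its trigger -- reachable, so GC can't collect them early.
var fetchers: EntryFetcher[] = [];

class EntryFetcher {
    var req: HttpRequest;
    var input = bind req.input on replace {
        if (input != null) {
            // process input and convert to a calendar entry,
            // close the stream, then let this fetcher be collected:
            delete this from fetchers;
        }
    }
}

function getEntryForDate(date: Date) {
    insert EntryFetcher { req: HttpRequest { ... } } into fetchers;
}
```

The point is that the reference chain from a script-level variable down to the trigger is what keeps the trigger alive; deleting the fetcher once the work is done avoids leaking completed requests.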

Since the foreign trigger idiom is so common, and since it’s so easy and dangerous to use it on local variables, I’ve filed a bug (JFXC-2168) on this problem. The solution isn’t obvious. There’s some discussion about potential solutions in the bug report.

Moreover, the problem isn’t confined to local variables. If you use the foreign trigger idiom on an object’s variables, you have to make sure the object itself doesn’t get garbage collected. If you don’t keep a reference around to the object in question, it’s liable to get collected itself along with your trigger, and you end up with exactly the same problem. More on that later.

Subprime Technical Debt

The concept of “Technical Debt” was first introduced by Ward Cunningham in an OOPSLA 1992 experience report. Martin Fowler has a pretty concise definition of technical debt. Steve McConnell expanded the concept and even created a taxonomy of different kinds of technical debt.

Just what is technical debt? I recommend reading the above-linked articles, but here’s a brief definition if you don’t want to click away. Technical debt is speeding things up now by taking a shortcut, even though it’ll slow you down later. The metaphor holds up pretty well. Technical debt tends to grow over time if you don’t keep an eye on it. A system with a lot of technical debt is harder to work on than one without, so progress on it is slower; this is like paying interest. And so forth.

Given that the current recession was caused by the collapse of the subprime mortgage industry, is there a way to extend the metaphor of technical debt to include subprime debt? I think so. Subprime mortgages were those issued to people who couldn’t afford them, or who were uninformed about how the debt instrument actually works. (You mean, I can’t pay interest-only forever?) The analogy isn’t exact, but I think these are comparable to two broad classes of debt that I’ve seen on software development projects. The first class is created by managers who irresponsibly take on excessive debt, attempting to achieve short-term goals at the cost of making long-term goals almost impossible to achieve. The second class occurs when engineers take on problems without realizing that they are difficult or intractable, even though this may already be well known in the industry.

As an example of management irresponsibility, consider a project that is approaching its “feature complete” or “code complete” milestone. The idea is that coding of all the new features is supposed to be completed and checked in, though it’s acknowledged that it will have bugs, and there is room later in the schedule for bugfixing. (I actually think the idea of having a schedule with a “feature complete” milestone followed by bugfixing is completely bogus, but that’s a topic for another article. Many projects do seem to be run this way, however.) Inevitably, time to implement the features starts to run short, and managers will attempt to remain “on schedule” by telling developers to write code more quickly, that they don’t need to worry about bugs (because we’ll have time to fix them later), that they can just implement a skeleton of the feature initially (and implement the corner cases as bugfixes later), and so forth. Taken to the extreme, no code for feature X is actually integrated, but a bug “feature X doesn’t work” is filed, which allows the project to reach feature complete “on time,” and which allows the feature to be developed later under the guise of bugfixing.

Of course, this is a completely silly way to run a project. Any bugfixing time in the schedule is intended for fixing unknown, unforeseen bugs. Using this time to work on “known bugs” (really, incomplete features) and even on outright feature implementation leaves too little time to work on the unknown bugs, and on the bugs arising from features being implemented late and in a hurry. In reality, the schedule has already slipped, but the managers haven’t realized it yet. Or worse, they’ve realized it, but they aren’t willing to admit it. This is reckless and irresponsible. It happens all the time.

As an example of engineer naïveté, let me refer you to an article from Jeff Atwood’s Coding Horror blog on cross-site scripting. Basically, if you let users input arbitrary HTML and you include it directly on your page, they can use this to take over your site! Oh, well, to avoid this you just have to sanitize the HTML before including it. Should be a few regexps, right? Wrong. To do this properly pretty much requires writing a full-blown HTML scanner and parser that mimics the behavior of the scanner and parser in the actual web browser. If you don’t get this exactly right, somebody will be able to find some fragment of HTML that gets past your sanitizer but does malicious things when processed by the browser. Dealing with SQL injection attacks is similar. If you approach these problems thinking you’ll use pattern-matching instead of parsing, you condemn yourself to a never-ending series of hacks, patches, workarounds, and kludges that makes the system unmaintainable and yet doesn’t really solve the problem. Some people might consider this job security, but what a crappy job.
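To make this concrete, here’s a small Java sketch (the stripping regex is a deliberately naive strawman of my own, not taken from any real sanitizer) showing how a single-pass tag-stripping regex can be tricked into assembling the very tag it is supposed to remove:

```java
public class SanitizerBypass {
    public static void main(String[] args) {
        // Attack string: a <script> tag with another <script></script>
        // pair spliced into the middle of its name.
        String input = "<scr<script>ipt>alert(1)</scr</script>ipt>";

        // Naive "sanitizer": strip script tags with a regex, in one pass.
        String stripped = input.replaceAll("(?i)</?script[^>]*>", "");

        // Removing the embedded tags reassembles a live script tag.
        System.out.println(stripped);  // <script>alert(1)</script>
    }
}
```

Re-applying the regex until a fixed point closes this particular hole, but attackers then move on to malformed attributes, odd encodings, and browser parsing quirks, which is why the problem effectively requires a real parser.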

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.
— Jamie Zawinski

Whether through management irresponsibility or engineering naïveté, these cases create an unsustainable amount of technical debt. The result is, if not outright failure, a death march.

(Hat tip to Robert McIlree for an article comparing subprime mortgages to technical debt, which didn’t really inspire this article, but which does predate it.)

In my previous entry, I mentioned that binds and triggers could extend the lifetime of local variables. This is incorrect. I think what’s really happening is that local variables aren’t destroyed immediately when their defining scope exits; they can continue to live for an arbitrary period of time. In a language like Java, you cannot observe anything about a local variable after its scope has exited, since there’s no way to name it or to create a reference to it. So, you can’t tell whether the variable has been destroyed immediately or whether it sticks around. However, in JavaFX Script, you can observe a local variable after its scope has exited. Consider the following example.

var v = 0;
function f(p: Integer):Void {
    var localvar = bind p + v on replace old {
        println("localvar: {old} => {localvar}");
    }
}
f(17);
f(32);
println(">>> increment v");
v++;
println(">>> increment v again");
v++;
println(">>> done!");

If you compile and run this program, the output is:

localvar: 0 => 17
localvar: 0 => 32
>>> increment v
localvar: 17 => 18
localvar: 32 => 33
>>> increment v again
localvar: 18 => 19
localvar: 33 => 34
>>> done!

(Note that the trigger fires the first time, when the variable is initialized from its default value of zero to the value of the expression in its initializer.)

Like the previous example, this shows that there are two distinct local variables named localvar that have been created by the two calls to function f. Normally after f returns, there’s no way to observe localvar. However, since it’s been initialized to a bind-expression that uses an external variable (v in this case), we can change its value by manipulating that external variable. Furthermore, we can observe changes to the value by attaching a trigger (“on replace”) expression that has the side effect of printing a message. Pretty cool, eh?

Well, maybe. The problem is that although we can observe localvar by placing a trigger on it, there is nothing external that references it. This seems pretty fragile, since things that don’t have references to them are subject to being garbage collected. Let’s test this by allocating a bunch of memory to force GC. Just before “increment v again” insert the following code:

var seq: String[];
for (i in [1..50000]) {
    insert "{i}" into seq;
}

(Your mileage may vary. On my system, a loop of 50,000 causes GC every time.) If you run the program again, the output is as follows:

localvar: 0 => 17
localvar: 0 => 32
>>> increment v
localvar: 17 => 18
localvar: 32 => 33
>>> increment v again
>>> done!

What just happened? This is quite odd. In Java, the only way to observe anything about an object is to have a reference to it, and having a reference will prevent it from being collected. In JavaFX Script, we can place a trigger on a local variable in order to observe changes to its value, and we can change its value by virtue of having initialized it with a bind-expression. But we don’t actually have any references to it, so the variable, the bind-expression, and the trigger are all subject to garbage collection!

This is admittedly a pretty obscure corner of the language. Why would anybody want to put a bind and a trigger on a local variable? In my next blog post, I’ll explain why this construct has come up repeatedly in real programs, how the GC issue has caused problems, and what to do about it.

UPDATE: followup post is here.

By “extent” I mean, how long do local variables exist?

In languages such as C, C++, and Java, local variables are destroyed as soon as you leave the scope in which they’re declared. If a function or method has a local variable, each call creates a fresh variable with the same name. In C and C++ you can try to do things like reusing the variable without initializing it, or returning its address, but the results are undefined and your program is incorrect.

C++ provides for somewhat stronger behavior for local variables that are objects, in that the object’s constructor is called when the scope is entered and the destructor is called when the scope is exited.

Java has similar rules, but the abstraction is much stronger. You can’t take the address of a local variable, nor can you attempt to use a local variable before initializing it — it’s a compiler error. You can try to “hang on” to a local variable by using it within an anonymous class, but the language requires that such locals be declared final. (I’m not entirely sure why; I think it allows the implementation to copy the value somewhere else so that all the locals in the method scope can be destroyed.)
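Here’s what that “hanging on” looks like in Java (the Counter interface and CaptureDemo class are names I’ve made up for illustration). Because the captured local must be final, the anonymous class copies its value into a field of its own if it wants mutable state:

```java
public class CaptureDemo {
    interface Counter { int next(); }

    static Counter make(final int start) {
        // 'start' must be final to be captured by the anonymous class.
        return new Counter() {
            private int value = start;  // captured value, copied into the instance
            public int next() { return ++value; }
        };
    }

    public static void main(String[] args) {
        Counter g = make(17);   // each call captures its own copy
        Counter h = make(32);
        System.out.println(g.next());  // 18
        System.out.println(h.next());  // 33
        System.out.println(g.next());  // 19
    }
}
```

The two instances of value persist after make() returns, for as long as g and h are reachable, which is the Java analogue of the JavaFX Script closures discussed next.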

In JavaFX, local variables can hang around for an arbitrary length of time. One way of doing this is by creating an inner function that references the outer function’s local variable. This creates a “closure,” that is, a closed environment that contains the local variables that are in scope at the time of the inner function’s creation. JavaFX has had closures for quite some time; Jim Weaver wrote a nice article about this over a year ago. Here’s a denser example:

function f(p: Integer): function(): Integer {
    var localvar = p;
    function(): Integer {
        ++localvar;
    }
}

What the heck does this do? First, f is a function that takes an integer and returns a function-that-returns-an-integer. This inner function (which has no name) increments the local variable and returns its new value. (Note that I had to copy the parameter into a local, since in JavaFX function parameters cannot be modified.) Now let’s call f a couple times, and then call each of the returned functions a couple times.

var g = f(17);
var h = f(32);

println("g() => {g()}");
println("h() => {h()}");
println("g() => {g()}");
println("h() => {h()}");

The output is:

g() => 18
h() => 33
g() => 19
h() => 34

(Astute readers will find this discussion reminiscent of the first chapter of Abelson & Sussman.) What’s going on here is that the first call to f created a local variable and a function that captured it, and returned this new function. This function was stored in g. The second call to f created a different local variable and a different function and returned it, and this was stored in h.

It’s not clear whether the functions “really” are different. The compiler might generate and use the same code for them, but they definitely use different environments. If you compare g and h you’ll find that they are not equal. Indeed, calling them gives different results. As you can see from the output, they clearly contain different instances of the local variable localvar. What’s more, these two instances of localvar exist long after f has returned. And they’ll continue to exist for as long as you hold onto g and h.

But inner functions aren’t the only way that local variables can continue to exist after their enclosing function exits. The bind and trigger (“on replace”) language constructs can also extend the lifetime of local variables. More on that in my next post.

Update: next post is here.

Victory and Defeat

It’s now been a week since Obama’s inauguration, more than enough time for the pundits to express their opinions about his inauguration speech. Commentary has come in from all the usual suspects.

Like many of the critics, I thought Obama’s speech was good, but not great. It was neither soaringly inspiring nor overly alarmist. It struck a middle tone of cautious optimism for the future while warning of the amount of work and sacrifice that will be required. It seemed calculated to please liberals and conservatives alike. It did so by using careful phrasing that allowed the critics to project their own interpretations onto the speech. Let me do the same.

The phrase that was surely crafted to please conservative listeners was “And for those who seek to advance their aims by inducing terror and slaughtering innocents … You cannot outlast us, and we will defeat you.” Of course: Obama will be tough on terrorism! In this day and age, who would not be tough on terrorism? Who would hesitate to proclaim being tough on terrorism, if such hesitation could be interpreted as being soft on terrorism?

What does it mean to be tough on terrorism? Of course, it means that we will bomb Iran! That’s the only way to be tough on terror, right? There’s a group of people who think that the only way to defeat our enemies is to bomb them (or blow them up, or shoot them, or whatever). Probably a bunch of Republicans do. Maybe McCain does (though his “Bomb Iran” line was possibly a joke). Certainly the questioner from the audience described in the above-linked article believes in bombing. The Iranians are evil, so we have to drop bombs on them. End of story.

OK, so we drop some bombs on Iran. Now what? Now that we’ve done so, are they going to say “Sorry about that, we didn’t mean it, let’s be friends”? Of course not. They’ll be bigger enemies, and a bunch of people sympathetic to Iran will also become our enemies. Oh great, more enemies. Should we bomb them too?

Trust me, this doesn’t stop. Remember when Yugoslavia broke up after the fall of communism? Some of the tribal warfare that broke out had roots going back 800 years. Eight. Hundred. Years. So if you want to bomb Iran and create some enemies who will stick around for the next few centuries, be my guest.

So, how do we achieve victory if not through guns and bombs? A clue about how Obama’s policies will take us forward can be found just a few sentences earlier in his speech:

Recall that earlier generations faced down fascism and communism not just with missiles and tanks, but with the sturdy alliances and enduring convictions.

They understood that our power alone cannot protect us, nor does it entitle us to do as we please. Instead, they knew that our power grows through its prudent use. Our security emanates from the justness of our cause; the force of our example; the tempering qualities of humility and restraint.

This is, I think, amazing speechwriting. It advocates “soft power,” which should please the liberals, while pleasing the conservatives by tying it to Reagan’s defeat of communism and the Greatest Generation’s victory in WWII. Simply astounding.

Given this context, what is victory? Here’s my definition:

You have achieved victory when you have convinced your adversary to change his behavior in your favor.

Not easy or straightforward. But more likely to secure peace than bombing the other guy.

In the past few months, I’ve read most of Weinberg’s Quality Software Management series. There are four books in this series:

1. Systems Thinking
2. First Order Measurement
3. Congruent Action
4. Anticipating Change

I first heard of this series several years ago, and I had a couple reactions. The first was to be intimidated. Wow, four volumes about software quality. I bet they’re full of charts and graphs and statistics, because that’s what it takes to make quality software, right? And I bet they’re deathly dull, too. (Boehm’s Software Engineering Economics is like this.) As it turns out, Weinberg’s series is far from intimidating and is in fact quite accessible. It’s mostly about people and teams and how they interact. Software is almost incidental. There are graphs and diagrams, but they’re all pretty qualitative. That is, they don’t show numbers. They show trends and relationships. For example, as the complexity of a problem increases, the effort required to solve the problem increases exponentially, not linearly. It doesn’t matter what the exact numbers are; it’s the shape of the curve that counts.

My second reaction was about the ambiguity in the title. Are these books about management techniques that lead to high-quality software, or about high-quality management in software projects? Weinberg addresses this issue at the very beginning of volume 1. The answer is: both. In order to produce high quality software, the quality of management must be improved. The way this is done is to consider software projects as systems of people, which leads into the heart of the subject matter of the first volume.

I stalled out reading volume 4. What I read of it is very good, and it’s a logical continuation of the preceding volumes. But it’s too advanced for me and my projects. We’re still stuck working on stuff that’s covered in the first couple volumes. We’re set up in a particular way, and stuff manages to get done after a fashion. But I don’t actually believe that people consciously understand how their actions affect the system. Basically they’re reacting to situations as they come up, and dare I say, their reactions are often not congruent.

I’m a fan of Distributed Version Control Systems (DVCSs). I think the first DVCS was Teamware. After using Teamware effectively for many years, I ran across a bunch of people who thought that Subversion was the greatest thing since sliced bread. I’ve written about this before. I think it’s just because they don’t know any better; they never used a DVCS, so they don’t know what they’re missing when they use Subversion.

I use Mercurial; I don’t use git, but I hear a lot of good things about it. Even though Mercurial and git are rivals in some sense, I think of them as allies in the struggle against centralized VCSs.

I recently read an article reporting that DVCS adoption is soaring among open source projects. I was amused when it referred to Subversion and CVS as “legacy” systems. Heh heh heh.

Welcome to my new blog. Why a new blog, given that I have two others that I’m not using?

I wasn’t using those other blogs not because I lacked time to blog or ideas to blog about, but because I felt inhibited blogging on them.

It’s easy to explain why I wasn’t using the java.net blog. I created it for the phoneME project, part of the Mobile & Embedded community that Sun created on java.net around open source Java ME. That blog was tied to that project and that community. I haven’t been involved in that community for nearly two years, so it didn’t seem sensible for me to use it.

The S Marks The Spot blog is a bit harder to explain. I’ve been a Sun employee for a long time, and I’m not intending to leave anytime soon. Sun lets its employees create blogs on blogs.sun.com, and not everything there has to be Sun-related. In fact, lots of people post lots of non-Sun things on their Sun blogs; Sun has a fairly liberal blogging policy that even encourages this. Furthermore, after a blogger leaves Sun, their material is normally preserved for viewing (though they can’t post anymore). I think it’s great that Sun has this policy. So why didn’t I take advantage of it?

I did for a while. But eventually I noticed that I’d have ideas yet not feel motivated to write them up. Sometimes I’d even write up entries but not post them. I had built up a lot of internal resistance to posting there.

It took a while, but I finally figured out that the problem was the ambiguity inherent in having a “personal” blog on a corporation’s website. Is my Sun blog about me, or is it really a “corporate” blog written and edited by me? This ambiguity is reflected in how I describe it: “my Sun blog.” Is it my blog or Sun’s blog? The ambiguity is also reflected in the flame-wars that have popped up several times on Sun’s internal bloggers mailing list. The argument is between those who believe blogs must be personal and authentic, and those who believe that blogs are a tool of marketing and communication and should be used to their fullest advantage. The first group of people think that blogs in the second group are somehow invalid.

I actually think corporate blogs are fine. After all, I don’t expect Jonathan’s blog or our corporate counsel’s blog to do much other than represent the company position. But I didn’t want a blog like that. Worse, I didn’t want to create a personal blog on sun.com and have to try to convince anybody that it wasn’t a corporate blog.

And, in case it hadn’t occurred to you, I should mention that Sun has announced, but not yet executed, layoffs of up to 18% of its workforce. Whether or not I survive this round of layoffs, it’s a reminder that I won’t be at Sun forever: yet another reason not to invest in a Sun blog.

So here we are. Maybe this shouldn’t be viewed as a new blog, but as a continuation of my old blog. I hope for it to become everything that I wanted my old blog to be, but never became. Welcome.

I installed Windows on my Mac.

I know, not such a big deal. I’ve installed Windows a bunch of times on actual PCs, and I’ve even installed it a few times on my Macs under Parallels virtualization. But I just used BootCamp to install Windows XP directly on my Mac Pro. So what’s the difference?

Under virtualization, Windows lived in its own padded environment. It (usually) ran in its own window. I could stop and start it at will, but the Mac was always there. Running directly under BootCamp… there’s something about it… Windows is actually touching the Mac. There’s just something profoundly wrong about pressing the Power button, hearing the classic Mac startup chime, and then seeing the Windows logo come up.

I have to say the installation was probably one of the smoothest ever. This is partly because the Mac Pro is a very capable platform; Windows would probably install smoothly on most modern PCs. But it’s also because I had the foresight to download and burn XP SP3 to CD before starting. This enabled me to skip several time-consuming Windows Update steps.

The shortened sequence was as follows:

1. Boot the Windows XP SP2 CD; go through the initial install screens and start the install.
2. Reboot into XP from the hard disk.
3. Install the Apple BootCamp 2.0 drivers (reboot).
4. Install the Apple BootCamp 2.1 drivers (reboot).
5. Install the Windows XP SP3 update (reboot).
6. Update Windows Update itself.
7. Download and install 28 Windows updates via Windows Update (reboot).

Overall I think this took only about half an hour. By pre-downloading SP3, I avoided downloading (I think) 47 individual updates, which Windows Update would otherwise have installed before even offering SP3. Amazingly, there was only one hitch, which occurred when I switched my USB keyboard/mouse away from the XP installer to work on my laptop for a while. After switching back, XP either refused to see the keyboard or hung; I couldn’t tell which. So I had to reboot. Even so, I was able to do the install with only about six reboots.

So, why?

No good reason, but a couple of mediocre ones. Running Windows under virtualization, if something goes wrong you can never quite tell whether it’s a bug in the program, a bug in Windows, or a bug in the virtualization layer. Sometimes multimedia and games just don’t work. I wanted to eliminate that problem. The other reason was that probably a year ago I bought Grand Theft Auto: San Andreas for PC (yes, I’m an entire game behind), and I didn’t have any PC hardware powerful enough to run it. It didn’t seem worth buying a new PC just for a game (plus I don’t have space for one). And I had a disk in the Mac with enough space for a BootCamp partition. So the solution suggested itself.

For this, I’ve sullied my Mac. I haven’t even installed GTA:SA yet. We’ll save that for another time.