Lately I've gotten back into Perl programming in a big way,
and it has honestly been an absolute delight.
Perl was my first proper programming language. As a 17-year-old I picked up the
llama book, Learning Perl, the
third edition, covering Perl 5.6. The choice of language was largely made
because it was used extensively in Mandrake Linux, which I was contributing to
at the time. Learning Perl was an excellent introduction, getting me quickly
up and running, and pretty much sealing Perl as my go-to programming language.
My first long-term programming job was also Perl.
As time went on and I was no longer programming for a living, I drifted towards
other languages. I never really left Perl, since I was maintaining several
programs written in it, and it was still my go-to for one-off scripts, but
for larger projects I drifted along with a lot of the programming community
towards, among many others, JavaScript.
The npm shenanigans of the past few months made me long for the days when I
could, for the most part, install my dependencies through my distro's package
manager, and not pull the latest and greatest (and backdoored) version from a
third-party package registry.
So, since I wanted to get away from the Node ecosystem, and my static site
generator was written in JS, I thought I would try my hand at writing one in
Perl. This is the most fun I've had writing code in years. It's just so
nice to write.
Sure, part of it is that I'm very used to writing Perl. But a lot really comes
down to the various niceties that Perl offers. strict mode makes it harder to
make simple mistakes, like mistyping a variable name, as the program will
refuse to compile if you have done so. state variables, which let subroutines
maintain state between invocations without using global variables, and without
having to pass a state object around, are great. I miss one or both of these
features when writing in other languages.
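To make that concrete, here's a minimal sketch of the two together (the sub name next_id is just an example):

```perl
use strict;
use warnings;
use feature 'state';    # state variables, available since Perl 5.10

# A counter that keeps its value between calls, with no global
# variable and no state object passed around.
sub next_id {
    state $id = 0;      # initialized once, retained across invocations
    return ++$id;
}

print next_id(), "\n";  # 1
print next_id(), "\n";  # 2

# Under strict, mistyping the variable (say, $di instead of $id)
# is a compile-time error instead of a silently-created global.
```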
Perl has, of course, evolved quite a bit in recent years, with proper
subroutine signatures and builtin try/catch probably being the most prominent
additions. A new object system is available as an
experimental feature, but in the meantime we've got
Moo, providing a very nice syntax for writing
classes.
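A quick sketch of both features, sticking to core perl so no CPAN modules are needed (try/catch requires perl 5.34 or newer, where it was introduced as an experimental feature; the sub names here are just examples):

```perl
use strict;
use warnings;
use feature qw(signatures try);
no warnings qw(experimental::signatures experimental::try);

# A real signature: arguments are unpacked for you, and the call
# fails loudly if the wrong number of arguments is passed.
sub divide ($numerator, $denominator) {
    die "division by zero\n" if $denominator == 0;
    return $numerator / $denominator;
}

# Builtin try/catch instead of the old eval-and-inspect-$@ dance.
sub safe_divide ($n, $d) {
    try {
        return divide($n, $d);
    }
    catch ($error) {
        warn "could not divide: $error";
        return 0;
    }
}
```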
Perl is also fast, not only at runtime but also at compile time. The
latter is really important for command-line programs, where any startup
delay is immediately visible to the user.
Writing Perl is fun. My SSG, BoringSSG, is coming along
nicely. Even though it's sort of old-school, it feels fresh and very
comfortable to be able to get my dependencies through my regular package
manager.
It's not the same language I learned from the llama many years ago. It has
improved immensely, while never losing any of the things that made it great in
the first place. If you've overlooked Perl, I recommend giving it another look.
Or perhaps another language that you enjoyed a long time ago. You might feel
some renewed joy in coding too.
Just released wwine version 0.3. wwine is a wine wrapper that wraps wine,
Crossover etc. into a single unified interface, with full bottle support. This
version contains several improvements over 0.2. It adds additional
information to the output of the --kill and --drykill commands: these now list
the bottle and wine flavour each process belongs to when used under Linux. It's
also now possible to combine --kill or --drykill with --wine and/or --bottle,
allowing targeted kills of only a single bottle instead of all running
processes.
This version also introduces support for wine packages installed via
PlayOnLinux. PlayOnLinux supplies many prebuilt
wine packages, easing wine installation and allowing several to be installed
side-by-side. wwine reads from the list of installed PlayOnLinux wine packages
and adds those to the list of available --wine versions. It also adds support
for using PlayOnLinux bottles, so wwine may be used on games and software
installed through PlayOnLinux. Users may now use wwine --list to list
available wine versions.
Lastly, several minor tweaks were made, including improved support for GameTree
Linux and the addition of --cxg-installdir (which complements the existing
--cxinstalldir). A crash that could occur when --wrap was combined with --wine wine without also supplying a --bottle was fixed, and the deprecated parameter
--env-bottle was removed.
It is available for download from the wwine website.
I've just released gpgpwd.
It is a simple password manager for people who live on the command-line. It
stores a list of passwords in a GnuPG-encrypted file, and provides an
interface for retrieving, adding, changing and removing entries.
The basics are simple: gpgpwd set X sets the entry for X in the file. The
password is not accepted on the command-line, but will be requested
interactively to avoid it showing up in ps. gpgpwd will provide you with
a randomly generated password that you can use, or you can provide your own.
gpgpwd get X retrieves the entry for X (and copies it to the X11 clipboard
unless --no-xclip is supplied) - this command takes a regex, so you don't
have to type out the entire name each time. Finally gpgpwd remove X removes
the entry for X.
In addition to these basic commands it also has a few extra features to make
life a bit more convenient, the main one being git support. You can tell
gpgpwd that your password file is inside git, which will make gpgpwd git pull
before reading the file, and git commit+git push after writing to it,
allowing you to easily synchronize your passwords across several computers.
It also has a command that will let you batch add passwords from a file, if
you're importing passwords you already have stored.
You can grab it from
its website, where you can
also read the full manpage. If you give it
a go and have some feedback, I'd love to hear from you - either in the comments
here, via e-mail, IRC or through the github issue tracker for gpgpwd.
A few months ago I switched to GNOME 3. After an initial period where some
of it felt rather awkward I settled in, and can now safely say that I don't
want to return to a more «conventional» desktop environment. gnome-shell is
very convenient and efficient to use.
That being said, GNOME 3 has one major issue: it uses pulseaudio.
While pulseaudio may bring many useful features to the table, it does so at the
price of breaking sound at random intervals for no good reason. The largest
design flaw is, imho, that it has decided that it's better at everything, and
thus everything should use it - and so it redirects output to alsa's default
devices to itself. If it was 1:1, 100% compatible with ALSA (yes, bug for bug
if need be), that wouldn't be a problem - but it isn't, and thus it breaks
stuff. The gain for me personally from using pulseaudio is nowhere to be seen,
so I don't see any point in keeping it around.
Since GNOME 3 uses pulseaudio, though, I can't easily remove it. That would
break e.g. GNOME's volume control. As a quick hacky workaround I edited
/usr/share/alsa/pulse-alsa.conf - and by «edited» I mean «replaced with an
empty file». This should, in theory, make ALSA's default output go to your
soundcard again, instead of to pulseaudio. The file is not a real config file,
though, so it gets replaced on each update of its package. A simple echo > /usr/share/alsa/pulse-alsa.conf in /etc/rc.local is enough to keep that
monster at bay (for the most part).
Sometimes the above hack fails to make stuff work again though, so an even more
blunt «HULK SMASH!» method is needed, namely: killing the pulseaudio process.
Pulseaudio is a tricky beast, though, and keeps respawning itself. Luckily,
it's easy to tell it not to do that. Edit ~/.pulse/client.conf and add
autospawn=no - that should make it stay dead after you kill it.
I wish GNOME would reconsider its use of pulseaudio. At least for me, it's
just not worth the extra work. I would much rather have my sound just work. Even
if pulse is not to blame for all of my sound issues, killing it usually
resolves them.
I've recently given git-annex a try. It is based around the initially
mind-boggling concept of tracking files in git without checking them into git. In
reality it's not so strange. Git is great at tracking text files, and smaller
binary files. But for many large files, it becomes inefficient, both because of
its storage format and because it ends up storing two copies of each file. What
git-annex does is keep track of various metadata in a separate branch, and just
place symlinks to the real data in the main directory.
This opens up a bunch of possibilities. For instance, given that the files are
outside of git, you can check out the entire tree without having the files
there, and then you can tell git-annex to get the files. I already use git for
most of my files, but for images, larger documents, music etc. it's not really feasible.
With git-annex though, it's no longer a problem. I simply check the files
that make sense into git, and the rest into git-annex (and having both kinds
of checkins in a single repository is no problem). I now use git-annex to
track my music collection, mail attachments (syncing from my server to
desktops) and images.
I've just released wwine 0.2. It adds support for the new Crossover 11.0
release, which changes some of the paths and needs a bit of additional magic to
use. It also has some improvements to the wrapper scripts it generates,
primarily through a new and more robust metadata header.
The most interesting new feature, however, is the addition of the --env and
--tricks parameters. --env causes wwine to set the WINE and WINEPREFIX
variables to the syntax used by vanilla wine; this allows various wine scripts
that use those to run with wwine's bottles, as well as with
Crossover. The most interesting use of this is the ability to use winetricks
with Crossover. This effectively lets you use Crossover as any other wine
release, while still using Crossover's bottles. So if you have winetricks and
Crossover installed, you need only run wwine -w cx -b BOTTLE --tricks ACTION to use winetricks with Crossover.
Last week I released version 0.1.1 of jQsimple-class. The main addition in this version is support for using jQsimple-class in CommonJS environments, such as node.js. All one has to do is place the CommonJS build of jqsimple-class somewhere in the include path and then do:
var jClass = require('jqsimple-class').jClass;
From there on out the API is the same as the browser one. The CommonJS build also has the same testsuite as the normal build, and passes all of the tests.
Outside of that, the standalone build has been stripped down to the bare necessities, shrinking the minified standalone build of jQsimple-class from 10KiB to 3KiB (the version that uses jQuery is just 1.5KiB).
Today I've released the first version of jQsimple-class, a small JavaScript class-declaration library. The reason I wrote it is that the usual way of building classes in JavaScript is frankly quite ugly, and inheritance is equally ugly. With jQsimple-class I've tried to make it as simple and intuitive as possible to write classes in JavaScript. The library itself is very small, and the syntax is simple. It's meant to let you quickly declare a class, and easily extend others, and then get completely out of your way. It only exports a single variable/function, named jClass (Class is a reserved word in JavaScript, so I went for the next best thing). Using jClass, and methods on it, it is possible to build classes, virtual classes and extend classes.
jClass() takes a single parameter, a JavaScript hash, where keys are method or attribute names, and the values are any valid JavaScript type. jClass.extend() lets you build a class that extends one or more existing classes. jClass.virtual() lets you construct a "virtual" class, that is to say, a class that cannot be instantiated, but that can be extended by others.
Internally jQsimple-class uses some jQuery methods, but it does not depend upon jQuery to be used; a standalone version that bundles the parts it needs (not all of jQuery, and without exposing them to the public namespace) is available for applications that do not use jQuery. I have written an extensive testsuite for jQsimple-class to make sure that things work as they should, and it works across all modern browsers.
For more examples and the full API, see the jQsimple-class documentation. jQsimple-class version 0.1 is available for download now. Minified it is only 1.5K (or 9K for the standalone version). Any feedback is welcome, feel free to do so in the comments, or, if you find a bug, on the bugtracker.
It uses Term::ReadLine, which gives you a simple session history if you have a Term::ReadLine::* implementation that supports it. It also uses Data::Dumper so that you can quickly see any data structures; you can always use scalar(STATEMENT) if the return value differs between list and scalar context.
Here's an alias that can be shoved into .bashrc :
alias 'perl-repl'='perl -MData::Dumper -MTerm::ReadLine -e '\''$r = Term::ReadLine->new(1);while(defined($_ = $r->readline("code: "))){$ret=Dumper(eval($_));$err=$@;if($err ne ""){print $err;}else{print $ret;}}'\'''
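For readability, here is roughly the same thing as a standalone script (a sketch; strict is deliberately left out, just like in the one-liner, so that code typed into the REPL can use variables without declaring them):

```perl
#!/usr/bin/perl
use Data::Dumper;
use Term::ReadLine;

# Evaluate one line of code and return what should be printed:
# the error message if the eval failed, the Dumper'd value otherwise.
sub evaluate {
    my ($code) = @_;
    my $dumped = Dumper(eval $code);
    return $@ ne '' ? $@ : $dumped;
}

my $term = Term::ReadLine->new('perl-repl');
while (defined(my $code = $term->readline('code: '))) {
    print evaluate($code);
}
```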
About two weeks back I released SWEC version 0.4. The largest new thing in this release is an updated file format for writing test definitions. The new format is a lot more flexible, and will also allow me to extend its syntax with more capabilities later on. SWEC can still read the old file format, and I'll keep the compat code in there until SWEC 0.6, so people have time to update their files (only minor changes are needed to bring them to the new format; it should only take a couple of minutes).
I also extended the command-line parser, so you can now say "swec example.com -s /test.html" where you would previously have had to do "swec --baseurl example.com -s /test.html". Beyond that it's mostly a bunch of cleanups, some refactoring and a few minor bugfixes, in addition to a new test suite so the thing can be properly sanity-checked before release.
If you need to sanity check dynamic websites, give SWEC a go.