This week on twitter 2008-12-14

Powered by Twitter Tools.

Core dumps

I was working on qmail ( patching it heavily ) and at one point I started getting a segmentation fault. Tracing it in a busy environment is hard, especially with qmail, which is very modular: processes come and go all the time.

So I thought a core dump could be the answer. The only problem was that my system wouldn't dump core for the processes where the segmentation fault occurred. I had set the core file size high enough ( ulimit -c 10000000 ) in the qmail start script, but still no core dump, so I decided to do some research.

From the core man page it seems a core may not be dumped for a number of reasons:

  1. the process owner doesn't have write permission to the directory where the core file should be written;
  2. the core file size limits ( RLIMIT_CORE , RLIMIT_FSIZE ) are smaller than the core that would be dumped;
  3. the process was setuid, in which case dumping the core depends on the setting in /proc/sys/fs/suid_dumpable ( see proc(5) );
  4. the file being executed doesn't have read permission.

In my case #2 and #4 were not an issue, so it must have been #1 and/or #3.

To rule out #1 I created a directory /tmp/cores, made it mode 777 so any process can write to it ( not safe in a multiuser environment, but it works ), and then set /proc/sys/kernel/core_pattern to /tmp/cores/core, so all core files would go to /tmp/cores instead of the process working directory ( the default ). It's also a good idea to make sure /proc/sys/kernel/core_uses_pid is set to 1, so the pid of the process is appended to the core file name.

To fix #3 I set /proc/sys/fs/suid_dumpable to 2. I restarted qmail and there it was: the core in /tmp/cores.
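Put together, the setup above looks like this ( a sketch assuming the standard Linux /proc paths; run it as root, the /proc writes are skipped when not permitted ):

```shell
# allow core dumps and collect them in /tmp/cores
ulimit -c unlimited        # lift RLIMIT_CORE for this shell and its children
mkdir -p /tmp/cores
chmod 777 /tmp/cores       # any process may write here ( unsafe on multiuser boxes )

# these writes need root:
if [ -w /proc/sys/kernel/core_pattern ]; then
    echo /tmp/cores/core > /proc/sys/kernel/core_pattern   # dump cores here, not in cwd
    echo 1 > /proc/sys/kernel/core_uses_pid                # append the pid to the file name
    echo 2 > /proc/sys/fs/suid_dumpable                    # let setuid processes dump too
fi
```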

Twitter threads

The best way to follow a conversation is to see it all in one place.

A twitter conversation is split over several timelines, and its messages are mixed with other conversations. This makes a conversation hard to follow, especially when you follow a lot of people.

With threading you would see all the messages of a conversation in one place, in a way that's easy to follow. It's also harder to be misunderstood this way.

Instead of replying to someone, you would reply to their message; twitter could then show threaded conversations. Of course twitter clients would need to be modified to support this new reply method. Actually, both methods should be available.

There are a lot of people saying twitter is very good just as it is and who are against any new features. Others believe twitter groups are what's really needed.

For me, threaded conversations are the feature I'd most like to see in twitter.

I'd like to see your opinion on this. Would you like threads? Let me know in the comments.

Twitter Updates for 2008-12-05

  • opendialect is such a disappointment. I regret having to install mono just to try it #
  • Yahoo search: you could go to google, but why not stay here http://tinyurl.com/5vas76 #
  • Good morning everyone! seems like google reader changed its interface #
  • Address-Book Additions boost Inbox delivery http://www.clickz.com/3631925 #
  • be careful when you buy something from godaddy. you never know what you agree to. #
  • @garyvee seems like the best option is to switch to a different service #
  • paypal email support is the worst. You have to go through at least two forms and you get a response ( if any ) after a week #
  • @steveweber before that one he also said 640k of memory should be enough. That's why later dos programs needed special drivers for more ram #
  • @adriana_iordan wow, it arrived super fast. I didn't think it would. Posta Romana rocks 🙂 Enjoy! #
  • @shoemoney just a suggestion on your newsletter format. 3 columns +1 one for ads is really bad for reading. Please use only two for text. #

Powered by Twitter Tools.

Squid 2.5 digest authentication

More than a year ago I wrote a post explaining how to set up secure digest authentication for the Squid proxy server, so passwords would not be sent to the server in plain text when authenticating.

That post was written for Squid 2.6, but recently I had to set up the same thing on Squid 2.5 and found out that the setup is a bit different.

Squid 2.5 is really old, and fewer and fewer people will be using it as even Squid 2.6 becomes obsolete with the release of Squid 3.0.
So if you are considering setting up a new proxy server, please use Squid 2.6 and take a look at how to set up digest authentication in Squid 2.6.

The differences are really minor, but here they are, in case I or someone else still needs to set up Squid 2.5 with digest authentication.

The first difference is in the way you have to specify the "digest program" auth param.

For squid 2.6 it has to be like this:

auth_param digest program /usr/lib/squid/digest_pw_auth -c /etc/squid/digest_passwd

but for squid 2.5 it has to be:

auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/digest_passwd

The second difference is in how the passwords are stored. In Squid 2.6 the passwords are stored securely as an MD5 hash, but in Squid 2.5 they are stored in plain text, in the format "username:password". ( One more reason to make sure /etc/squid/digest_passwd can't be read by anyone other than the squid user. )
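To make the storage difference concrete, here is a sketch of the two on-disk formats. The hashed recipe ( HA1 = MD5 of "user:realm:password", as in HTTP digest authentication ) and the credentials are my assumptions, so double-check them against your squid setup:

```shell
user=alice; realm=proxy; pass=secret    # hypothetical credentials
ha1=$(printf '%s' "$user:$realm:$pass" | md5sum | cut -d' ' -f1)
echo "$user:$realm:$ha1"    # squid 2.6 style entry: only a hash reaches the disk
echo "$user:$pass"          # squid 2.5 style entry: the password in the clear
```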

So with Squid 2.5, what you gain in security for the network transmission of the password you lose in security of the password storage. This may still be a good trade if your local security is good, since there isn't any way you can control the security of the network between you and the proxy server.

managing mysql binary logs

Binary logs are how mysql keeps track of what changed in the databases. Keeping such logs is recommended ( in case you want to recover the database ) and sometimes even required ( if you want to replicate the db ), but if you have a server where the database changes frequently, those logs will occupy a lot of your disk space.

If you are tempted to just delete ( rm ) some of the old log files, DON'T do it. Or if you do, remember to also update the index file and remove from it the lines naming the log files you deleted; otherwise you will get in trouble, and depending on your version the mysqld server might not start next time.

A better way than deleting them directly from the file system is to use the "purge logs" statement to delete all logs prior to a certain log file or a certain date. The only problem is that you still have to remember to do this from time to time, or set up a cron job for it, or you will end up doing it when mysql dies because it ran out of disk space. Luckily there is an even better solution.

There is a configuration option for the mysql server that lets you specify the number of days to keep logs for. Everything older than that is automatically deleted by the mysql server. The configuration variable is named expire_logs_days; something like expire_logs_days=30 will delete all log files older than 30 days.

Warning! purge logs and expire_logs_days might not work if you deleted bin log files directly from the file system. To make them work you will have to check each line in the .index file. Each line contains a bin log file name; if a file mentioned there doesn't exist on disk, you will have to delete that line. Then just restart the mysql server.
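That check can be scripted; here is a minimal sketch ( prune_index is my name for it, and the Debian-ish paths in the usage comment are assumptions ):

```shell
# prune_index FILE: rewrite a binlog .index file, keeping only the lines
# whose log file actually exists on disk ( entries are usually relative
# to the data directory, so run it from there )
prune_index() {
    while read -r logfile; do
        if [ -e "$logfile" ]; then
            echo "$logfile"
        fi
    done < "$1" > "$1.new"
    mv "$1.new" "$1"
}

# usage: cd /var/lib/mysql && prune_index mysql-bin.index , then restart mysql
```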

One other tip to make the logs use less disk space is to tell the server not to log databases you don't care about ( like databases used only for development or testing, which might still get a lot of updates but which you don't want to replicate and don't need to recover if anything breaks ). You can either tell mysql to keep bin logs only for some databases or to ignore others; the binlog-do-db and binlog-ignore-db configuration options will help you with this.
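For example, a my.cnf fragment combining these options might look like this ( the database names are hypothetical, and binlog-ignore-db is repeated once per database ):

```
[mysqld]
log-bin          = mysql-bin
expire_logs_days = 30        # auto-purge binary logs older than 30 days
binlog-ignore-db = devdb     # hypothetical development database
binlog-ignore-db = testdb    # hypothetical testing database
```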

exim and domainkeys on debian

This post is a follow-up to one of my previous posts, which described how you can create a custom exim package on debian.

In this post I will show you how to compile and configure exim with domainkeys support. The configuration will only cover signing outgoing emails, but it's easy to make it verify signed messages too if you read the exim DomainKeys documentation.

To do this, first follow the steps described in my previous post, and between steps 7 and 8 do these extra steps:

  1. install libdomainkeys:
    download from: domainkeys.sourceforge.net , extract and make ( the version number below is just an example, use the latest ):

    tar xzf libdomainkeys-0.69.tar.gz
    cd libdomainkeys-0.69
    make

    if it doesn't compile, with errors about resolv, it needs to link against libresolv; add -lresolv to the link flags and rebuild, for example:

    make clean
    make LIBS="-lresolv"    # the exact Makefile variable may differ, check the Makefile

    to install just copy the static lib and the header files:

    cp libdomainkeys.a /usr/local/lib
    cp domainkeys.h dktrace.h  /usr/local/include
    

    and then clean up:

    cd ..
    rm -rf libdomainkeys-0.69    # the directory you extracted
  2. Configure the exim custom package for domainkeys:
    add domainkeys support to the exim makefile ( these are the experimental domainkeys build options; double-check them against the exim documentation for your version ):

    EXPERIMENTAL_DOMAINKEYS=yes
    CFLAGS  += -I/usr/local/include
    LDFLAGS += -L/usr/local/lib -ldomainkeys

    And now continue with step 8 in the previous post

When you're done all that's left to do is edit exim configuration to enable domain keys signing:

open /etc/exim4/exim4.conf or /etc/exim4/exim4.conf.template in an editor

look for the remote_smtp transport definition and add the following configuration to it:

dk_domain = ${lc:${domain:$h_from:}}
dk_selector = default
dk_private_key = /etc/exim4/dk_keys/${dk_domain}_priv.key

Key management

create the directory that will hold the keys :

mkdir /etc/exim4/dk_keys

create the script that will generate and show the keys ( a minimal sketch; the script name, key size and paths are my choice ):

cat > /usr/local/bin/dk_newkey <<'EOF'
#!/bin/sh
# usage: dk_newkey domain.tld
# writes the private key and prints the public key for the DNS record
openssl genrsa -out /etc/exim4/dk_keys/${1}_priv.key 1024
chmod 640 /etc/exim4/dk_keys/${1}_priv.key
openssl rsa -in /etc/exim4/dk_keys/${1}_priv.key -pubout
EOF
chmod +x /usr/local/bin/dk_newkey

generate a key for a new domain:

dk_newkey my_new_domain.tld

After you set the DNS TXT record, you can test the new setup by sending an email from the newly configured domain to an account at gmail or yahoo. At gmail, view the new message and click on "details"; it should show "signed-by: my_new_domain.tld". Yahoo will just show an icon with a key in the message header.
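For reference, the DNS records for a setup like the above generally look like the sketch below; the p= value is the base64 public key printed by openssl ( elided here ), and the optional o=- policy record declares that all mail from the domain is signed:

```
default._domainkey.my_new_domain.tld. IN TXT "k=rsa; p=..."
_domainkey.my_new_domain.tld.         IN TXT "o=-"
```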

Recover plesk access

Here's a scenario: you're locked out of the Plesk admin panel; you forgot the password and can't recover it because your email address is not set in the contact details.

Do you still have ssh access as root ( ssh keys, or you can still remember the root password )? Most of the time I use DSA keys for ssh authentication.
If you do, then you can change the password for admin.

Plesk keeps its passwords in the psa mysql database, so you just have to change the admin password in the psa.accounts table. But to have access to it you need root access in mysql.
If you don't have the password for mysql's root ( most likely on plesk servers ) you'll have to stop mysql and start it without privilege verification.

/etc/init.d/mysql stop
/usr/sbin/mysqld --skip-grant-tables &
mysql

That should work on most linux distros; on some the stop script would be /etc/init.d/mysqld, and on others the path to the mysql server might be /usr/libexec/mysqld.

Once you're logged in, run this sql to change the password ( the admin account normally has id 1, but double-check on your server ):

use psa;
update accounts set password = 'my_new_password' where id = 1;

Now get out of the mysql client ( CTRL+C ) and restart mysql to have privilege verification back, or else everyone would be able to do what you just did:

killall mysqld
/etc/init.d/mysql start

Now you can login to plesk with the new password.

debian: building custom exim packages

This is a small howto that explains how to build custom exim4 packages on debian.

It was tested with both exim 4.63 ( on debian etch ) and exim 4.69 ( on debian testing/lenny ) .

I needed to build a custom exim email server that would be built with domainkeys and/or dkim support for signing outgoing messages.

So here are the 13 steps I took to get this done:

  1. Create a directory named exim where all activity will take place.
  2. Make sure you have the 'deb-src' URIs in your sources.list file.
    If you don't have them, add them to /etc/apt/sources.list and then run apt-get update
  3. Install packages required for creating a custom package and building it:
    apt-get install build-essential fakeroot devscripts
  4. Install exim4 source package:
    apt-get source exim4
    cd exim4-*
  5. unpack standard configuration files:
    ./debian/rules unpack-configs    # target name per the exim4 debian packaging; check debian/rules if it differs
  6. Define the new package name. In this step we just put the new package name in a variable and export it in the environment to make the next steps easier. You can use anything for the package name ( actually it's just a package name suffix ) but I recommend using 'custom' for the package name for one main reason: dependencies. Packages that depend on exim4-daemon-light or exim4-daemon-heavy (like sa-exim, mailx and maybe others ) already accept exim4-daemon-custom as a replacement so with this custom package you're not breaking any dependencies.
    Ex:

    my_pkg_name=custom; export my_pkg_name
  7. Edit configuration files. There should be 3 EDITME configuration files for exim and one for eximon, one for each package that will be built. Copy one of the exim EDITME files to EDITME.exim4-$my_pkg_name, then edit the new file to set the options you want.
    Ex:

    cp EDITME.exim4-light EDITME.exim4-$my_pkg_name    # in the directory where the EDITME files were unpacked
    editor EDITME.exim4-$my_pkg_name
  8. pack the configuration files so your new configuration will be saved and used at build time:
    ./debian/rules pack-configs    # counterpart of the unpack target; check debian/rules if it differs
  9. Create the custom package. This is required only if you use a package name other than 'custom':

    # copy the exim4-daemon-custom stanza in debian/control and rename it
    # to exim4-daemon-$my_pkg_name
  10. Activate the new package in debian/rules. Edit debian/rules and look for the line where the extradaemonpackages variable is defined and add your package name ( exim4-daemon-$my_pkg_name ) to the list of packages defined there.
  11. Install build dependencies. You can skip this step if this is not the first time you build this package.
    apt-get build-dep exim4
  12. Build the packages:
    dpkg-buildpackage -rfakeroot -us -uc
  13. Install the new package. If you already had one of the exim4-daemon packages installed, you will have to remove it first, and then you can install the custom package. The new package will be in the base directory created at step 1.
    Ex. ( for amd64 etch exim 4.63-17 ):

    dpkg -r exim4-daemon-light
    dpkg -i exim4-daemon-custom_4.63-17_amd64.deb

This process went pretty well for both exim 4.63 and 4.69. Exim 4.63 only had experimental support for domainkeys ( not dkim ), and exim 4.69 on lenny supported both, but I was only able to build it after applying a small patch to exim to make it work with the latest version of libdkim ( 1.0.19 ).

This post was intended to be a general howto about building a custom exim package. I will write more details about actually building exim with domainkeys and/or dkim in a future post.