[{"content":"I started experimenting with Tailscale (along with the self-hosted coordination server Headscale) and I like it pretty much. One of the interesting properties of Tailscale is the separation of control and data plane, where it tries to establish a direct point-to-point WireGuard tunnel between peers. It gracefully falls back to relay servers if such a connection is not possible. This avoids a central VPN server that needs to be involved in every connection.\nOne can use the command tailscale status to find out if a direct connection between peers is used:\n100.64.0.1 host1 net1 linux active; relay \u0026#34;lhr\u0026#34;, tx 33036 rx 27232 100.64.0.2 host2 net2 linux active; direct 192.168.1.2:41641, tx 13892 rx 10024 The connection to host1 is relayed via a DERP server and the connection to host2 is direct where the WireGuard tunnel uses 192.168.1.2 as the outer IP address.\nOne day, I noticed something odd: direct connections between two peers in my local network are only possible if one of them uses WLAN. As soon as both peers are connected to the same switch, packets are sent via a relay. The peers are all on the same LAN and there are no weird firewall rules that block traffic. So it clearly should use a direct connection.\nThe solution is buried in the config of my HP ProCurve switch. It tries to be smart about DoS protection and has the innocent looking flag Auto DoS set:\nA direct connection is possible as soon as the feature Auto DoS is disabled.\n","permalink":"https://nblock.org/2022/07/03/direct-tailscale-connections-with-a-hp-procurve-switch/","summary":"A single setting on a HP ProCurve switch prevents direct connections on a Tailscale network.","title":"Direct Tailscale connections with a HP ProCurve switch"},{"content":"Background and introduction I\u0026rsquo;m currently working on extracting metrics from multiple Fronius Symo inverters. There are essentially two approaches to solve this problem:\nConnect the inverter with the Internet and upload everything to SolarWeb, a proprietary metrics platform hosted by Fronius. Use the Fronius SolarWeb mobile app to view the data. Operate the device offline and collect the metrics yourself. Fronius offers a JSON based API to query realtime and archive data from an inverter. Furthermore, the device also offers a push service, where the inverter can upload its metrics continuously to a FTP server or send it to a HTTP endpoint. Both methods can be used without in Internet connection. As you might have guessed, I implemented the second approach where each inverter pushes its metrics continuously to a server on the local network. From there, it is picked up and imported into InfluxDB. Grafana is used to visualize the metrics. Please contact me, if you are interested in how to get collection, transfer to InfluxDB and visualization up- and running for Fronius inverters.\nLost date and time After a few days of metrics collection, I noticed that one of the inverters regularly loses its local date and time. Getting timeseries data with a timestamp of 2000-01-01T05:02:15 is not very helpful.\nThe inverter\u0026rsquo;s webinterface offers to set the system time and also has a checkbox for \u0026ldquo;Set time automatically\u0026rdquo;.\nGreat, simply enable automatic time synchronization and allow outgoing NTP traffic (and DNS). As it turns out, the inverter does not use NTP to synchronize its system time. 
As you might have guessed, I implemented the second approach where each inverter pushes its metrics continuously to a server on the local network. From there, it is picked up and imported into InfluxDB. Grafana is used to visualize the metrics. Please contact me if you are interested in how to get collection, transfer to InfluxDB and visualization up and running for Fronius inverters.\nLost date and time After a few days of metrics collection, I noticed that one of the inverters regularly loses its local date and time. Getting timeseries data with a timestamp of 2000-01-01T05:02:15 is not very helpful.\nThe inverter\u0026rsquo;s web interface offers to set the system time and also has a checkbox for \u0026ldquo;Set time automatically\u0026rdquo;.\nGreat, simply enable automatic time synchronization and allow outgoing NTP traffic (and DNS). As it turns out, the inverter does not use NTP to synchronize its system time. It requires the user to enable the metrics upload to SolarWeb in order to synchronize its system time!\nI did not want to re-implement parts of the proprietary, UDP based protocol (PCAP dumps are available upon request), so I decided to check the API docs for endpoints related to date and time settings. Unfortunately, I could not find such an endpoint.\nThe next approach was to inspect the requests of the web browser and rebuild the necessary requests in curl. Put those requests in a script and invoke it regularly. The steps are easy:\nAuthenticate with the device (only HTTP Digest auth is supported) Set date and time As it turns out, the HTTP Digest implementation does not conform to the relevant RFC 7235. From section 4.1:\nA server generating a 401 (Unauthorized) response MUST send a WWW-Authenticate header field containing at least one challenge. A server MAY generate a WWW-Authenticate header field in other response messages to indicate that supplying credentials (or different credentials) might affect the response.\nThe datalogger does not respond with a WWW-Authenticate header, but instead sends an X-WWW-Authenticate header, which breaks the authentication workflow in curl. As there is no way to override the expected HTTP header in curl, I decided to implement (the broken) Fronius HTTP Digest authentication myself in Python. The workaround is easy: subclass the HTTPDigestAuth class from Requests and fix up the header name before Requests reads the header value.\nfrom requests.auth import HTTPDigestAuth\n\nclass HTTPDigestAuthFronius(HTTPDigestAuth):\n    def handle_401(self, r, **kwargs):\n        # Replace www-authenticate unconditionally\n        r.headers[\u0026#34;www-authenticate\u0026#34;] = r.headers.get(\u0026#34;x-www-authenticate\u0026#34;)\n        return super().handle_401(r, **kwargs)
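A quick usage sketch (credentials and the endpoint path are placeholders; the real requests are the ones recorded from the browser session):\nimport requests\n\nauth = HTTPDigestAuthFronius(\u0026#34;user\u0026#34;, \u0026#34;password\u0026#34;)\nresponse = requests.get(\u0026#34;http://DATALOGGER/some/endpoint\u0026#34;, auth=auth)\nresponse.raise_for_status()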
The entire script is available here (tested on Python 3.9 with Requests 2.25).\nConclusion Please use open standards and established protocols for common tasks such as NTP for keeping the system time in sync. Furthermore, don\u0026rsquo;t mess with the standards, just implement/use them as-is and test them with standard tools such as curl.\n","permalink":"https://nblock.org/2021/04/09/sync-date-and-time-on-an-offline-fronius-datalogger/","summary":"What happens when one does not use open standards for common tasks.","title":"Sync date and time on an offline Fronius Datalogger"},{"content":"Almost four years after the initial commit, I\u0026rsquo;m really happy to announce the first release of Feeds! Feeds provides Atom/RSS feeds in times of social media and paywalls. From the documentation:\nOnce upon a time every website offered an RSS feed to keep readers updated about new articles/blog posts via the users’ feed readers. These times are long gone. The once iconic orange RSS icon has been replaced by “social share” buttons.\nFeeds aims to bring back the good old reading times. It creates Atom feeds for websites that don’t offer them (anymore). It allows you to read new articles of your favorite websites in your feed reader (e.g. TinyTinyRSS) even if this is not officially supported by the website.\nFeeds is able to create full text Atom feeds for many different sites. Head over to the list of supported websites to see if your favourite site is supported.\nYou may install Feeds directly from PyPI (please note that the name on PyPI is different):\n$ pip install PyFeeds\n$ feeds crawl orf.at\nHave a look at the quickstart section for more usage information.\nSource Code: https://github.com/PyFeeds/PyFeeds\nPyPI: https://pypi.org/project/PyFeeds/\nDocumentation: https://pyfeeds.readthedocs.io\nWhile I started the project back in 2016, it was Lukas who really kept the project going and contributed most of the code to it. Thank you and all other contributors very much!\n","permalink":"https://nblock.org/2020/05/16/feeds-2020.5.16/","summary":"Feeds 2020.5.16 is now available.","title":"Feeds 2020.5.16"},{"content":"I got my hands on an OpenLDAP instance which started to exist sometime around 2004. The instance was upgraded several times and was quite unstable. It crashed seemingly at random when some users logged in on an LDAP-enabled system. The only thing that popped up consistently during those crashes was the password policy overlay (ppolicy). Turning it off made the crashes disappear. As the password policy overlay is required by the customer, disabling it was just a temporary solution.\nThe first step was to reproduce the crash. It turned out that enabling password authentication in OpenSSH while using nslcd triggered the assertion reliably. When a crash occurs, one can find the following line in OpenLDAP\u0026rsquo;s logs:\nslapd: ppolicy.c:912: ctrls_cleanup: Assertion `rs-\u0026gt;sr_ctrls != NULL` This assertion was reported several times and one of the reports was closed with the comment:\nThis turned out to be a configuration issue. Closing this out as NOTABUG.\nUnfortunately, the solution was not posted and it is hidden somewhere behind Red Hat\u0026rsquo;s commercial support website.\nAfter the usual GDB session without much success, I decided to review the configuration of this particular instance:\n$ cd /etc/ldap/slapd.d\n$ sudo grep -ri \u0026#34;ppolicy\u0026#34;\n...\ncn=config/olcDatabase={1}mdb/olcOverlay={2}ppolicy.ldif:dn: olcOverlay={3}ppolicy\ncn=config/olcDatabase={1}mdb/olcOverlay={2}ppolicy.ldif:objectClass: olcPPolicyConfig\ncn=config/olcDatabase={1}mdb/olcOverlay={2}ppolicy.ldif:olcOverlay: {3}ppolicy\ncn=config/olcDatabase={1}mdb/olcOverlay={2}ppolicy.ldif:olcPPolicyDefault: cn=default,ou=...\ncn=config/olcDatabase={1}mdb/olcOverlay={2}ppolicy.ldif:structuralObjectClass: olcPPolicyConfig\ncn=config/olcDatabase={1}mdb/olcOverlay={3}ppolicy.ldif:dn: olcOverlay={3}ppolicy\ncn=config/olcDatabase={1}mdb/olcOverlay={3}ppolicy.ldif:objectClass: olcPPolicyConfig\ncn=config/olcDatabase={1}mdb/olcOverlay={3}ppolicy.ldif:olcOverlay: {3}ppolicy\ncn=config/olcDatabase={1}mdb/olcOverlay={3}ppolicy.ldif:olcPPolicyDefault: cn=default,ou=...\ncn=config/olcDatabase={1}mdb/olcOverlay={3}ppolicy.ldif:structuralObjectClass: olcPPolicyConfig\n...
As one can see, the ppolicy overlay is referenced twice and the fix is quite easy: Remove the second ppolicy reference:\n$ sudo systemctl stop slapd $ sudo rm cn\\=config/olcDatabase\\=\\{1\\}mdb/olcOverlay\\=\\{3\\}ppolicy.ldif $ sudo slaptest -F /etc/ldap/slapd.d config file testing succeeded $ sudo systemctl start slapd The instance is now operating reliably.\n","permalink":"https://nblock.org/2020/01/27/assert-in-openldap-with-password-policy/","summary":"Fixing an assert OpenLDAP\u0026rsquo;s password policy overlay","title":"Assert in OpenLDAP with password policy overlay"},{"content":"While debugging an issue with OpenLDAP, I noticed that journalctl colors its output. From the man page:\nWhen outputting to a tty, lines are colored according to priority: lines of level ERROR and higher are colored red; lines of level NOTICE and higher are highlighted; lines of level DEBUG are colored lighter grey; other lines are displayed normally.\nWhile this is nice for higher priority levels such as ERROR, it is not useful for messages with DEBUG priority. Those appear in light grey which makes them hard to read on terminal themes with a light background.\nThe easiest way is to set the environment variable SYSTEMD_COLORS which overrides automatic coloring. To follow the OpenLDAP debug log:\n$ sudo SYSTEMD_COLORS=false journalctl -u slapd -f ","permalink":"https://nblock.org/2020/01/27/temporarily-disable-journalctl-output-coloring/","summary":"Temporarily disable journalctl output coloring.","title":"Temporarily disable journalctl output coloring"},{"content":"I\u0026rsquo;m currently working on a Tryton upgrade from 4.2 to 5.0 (via intermediate releases 4.4, 4.6 and 4.8). Tryton 5.0 is the first long term support release with support for 5 years. A useful property for an ERP system. In order to test each of the intermediate versions, I decided to quickly spawn a virtualenv and install the Tryton GTK client in there.\n$ virtualenv -p python2 venv-tryton-4.4 $ . venv-tryton-4.4/bin/activate $ pip install tryton~=4.4 $ tryton Traceback (most recent call last): File \u0026#34;~/venv-tryton-4.4/bin/tryton\u0026#34;, line 48, in \u0026lt;module\u0026gt; from tryton import client File \u0026#34;~/venv-tryton-4.4/local/lib/python2.7/site-packages/tryton/client.py\u0026#34;, line 17, in \u0026lt;module\u0026gt; import pygtk ImportError: No module named pygtk Installing pygtk and its dependencies in a virtualenv is not without its problems. But, there is a rather quick (and hackish) solution to this problem. This is only suitable for quick tests and experiments. Use proper Debian packages on production systems.\nPython 2.7 $ virtualenv -p python2 venv-tryton-4.4 $ . venv-tryton-4.4/bin/activate $ pip install tryton~=4.4 $ pip install PyGObject $ ln -s /usr/lib/python2.7/dist-packages/pygtk.py $VIRTUAL_ENV/lib/python2.7/site-packages $ ln -s /usr/lib/python2.7/dist-packages/gtk-2.0 $VIRTUAL_ENV/lib/python2.7/site-packages $ ln -s /usr/lib/python2.7/dist-packages/gobject $VIRTUAL_ENV/lib/python2.7/site-packages $ ln -s /usr/lib/python2.7/dist-packages/glib $VIRTUAL_ENV/lib/python2.7/site-packages $ ln -s /usr/lib/python2.7/dist-packages/gi $VIRTUAL_ENV/lib/python2.7/site-packages $ ln -s /usr/lib/python2.7/dist-packages/pygtkcompat $VIRTUAL_ENV/lib/python2.7/site-packages $ tryton Obviously, GTK 2 and the respective Python bindings must be installed.\nPython 3.x $ virtualenv -p python3 venv-tryton-4.8 $ . 
venv-tryton-4.8/bin/activate $ pip install tryton~=4.8 $ pip install PyGObject $ tryton By the way: Tryton 5.0 and later versions will only support Python 3.x and GTK 3.\nTested on Debian Testing with Python 2.7.15, Python 3.6.6 and Tryton 4.4, 4.6, 4.8 and 5.0.\n","permalink":"https://nblock.org/2018/09/27/tryton-gtk-client-inside-a-virtualenv/","summary":"Setting up Tryton\u0026rsquo;s GTK client inside a virtualenv.","title":"Tryton GTK client inside a virtualenv"},{"content":"An API is an important part of an application when it comes to automation. For those web applications that do not offer an API, I typically write some small application on top of Scrapy or Requests to solve the problem.\nThe application in question is an old PHP application. There is no API available and adding an API to it is out of scope. MySQL is used as the database and most of the 79 tables are stored with MyISAM. Recently added tables are stored with InnoDB. There are just a few unique constraints on the database side and no foreign key constraints (due to MyISAM). The entire logic is coded into the PHP application.\nThe web application\u0026rsquo;s database can be accessed directly, so there is no need for scraping. The following SQL listing provides a minimal working example for the purpose of this blog post.\n-- create tables CREATE TABLE IF NOT EXISTS `hosts` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, `state` int(1) NOT NULL DEFAULT \u0026#39;1\u0026#39;, `os_id` int(11) NOT NULL DEFAULT \u0026#39;1\u0026#39;, PRIMARY KEY (`id`), UNIQUE KEY `name` (`name`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; CREATE TABLE IF NOT EXISTS `os_types` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `name` (`name`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- stuff some sample data into the tables INSERT INTO `hosts` (`id`, `name`, `state`, `os_id`) VALUES (1, \u0026#39;astoria\u0026#39;, 1, 3), (2, \u0026#39;fiddle\u0026#39;, 2, 2), (3, \u0026#39;freeman\u0026#39;, 4, 3), (4, \u0026#39;liard\u0026#39;, 4, 3), (5, \u0026#39;leand\u0026#39;, 1, 1), (6, \u0026#39;algar\u0026#39;, 1, 2), (7, \u0026#39;ells\u0026#39;, 3, 2); INSERT INTO `os_types` (`id`, `name`) VALUES (1, \u0026#39;Debian Wheezy\u0026#39;), (2, \u0026#39;Debian Jessie\u0026#39;), (3, \u0026#39;Debian Stretch\u0026#39;); There are two tables, hosts and os_types. The rows of the os_types table are referenced via hosts.os_id inside the PHP application. There is no immediate connection on the database level. The hosts table contains a state column with the following magic numbers:\n1: active 2: disabled 3: unknown 4: deleted The task is simple: Connect to the database and print the name, state and the name of the OS.\nTry #1 I don\u0026rsquo;t want to write SQL by hand and I certainly don\u0026rsquo;t want to remember all the magic numbers. So, I decided to give SQLAlchemy, a popular Object Relational Mapper for Python, another try. The typical usage is to define your model in plain Python and let SQLAlchemy manage the database side for you. This is convenient for new projects or if the database has 5 tables in total. The database of this application manages 79 tables and some of them contain a lot of columns (e.g. 26 columns for a single table). That\u0026rsquo;s too much typing. Fortunately, SQLAlchemy offers a feature called Automap where it connects to a database, inspects the tables and tries to figure out the models for you. 
OK, now some code:\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\n\n# The declarative base used for the SQLAlchemy reflection.\nBase = automap_base()\n\ndef main():\n    engine = create_engine(\u0026#39;mysql://USER:PASS@HOST:PORT/DATABASE\u0026#39;)\n\n    # Perform automap and create a session\n    Base.prepare(engine, reflect=True)\n    session = Session(engine)\n\n    # Use the session\n    Hosts = Base.classes.hosts\n    OSTypes = Base.classes.os_types\n    for host in session.query(Hosts).filter_by(state=1).all():\n        os_type = session.query(OSTypes).get(host.os_id)\n        print(host.name, host.state, os_type.name)\n\nif __name__ == \u0026#34;__main__\u0026#34;:\n    main()\nRun it:\n$ python sqlalchemy1.py\nastoria 1 Debian Stretch\nleand 1 Debian Wheezy\nalgar 1 Debian Jessie\nIt works but there are some obvious limitations. SQLAlchemy was not able to figure out the relationship between hosts.os_id and os_types.id. So the programmer has to manage the relationship by hand (this happens in the PHP application). This is not only cumbersome but also error prone as one could just set os_id to an unreferenced value. Let\u0026rsquo;s try to fix that.\nTry #2 One can provide some hints for SQLAlchemy in order to build up a relationship. Take a look at the following version:\nfrom sqlalchemy import Column\nfrom sqlalchemy import ForeignKey\nfrom sqlalchemy import Integer\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy.orm import relationship\n\n# The declarative base used for the SQLAlchemy reflection.\nBase = automap_base()\n\nclass Hosts(Base):\n    __tablename__ = \u0026#39;hosts\u0026#39;\n\n    # custom types\n    os_id = Column(Integer, ForeignKey(\u0026#39;os_types.id\u0026#39;))\n\n    # relationships\n    os = relationship(\u0026#39;os_types\u0026#39;, backref=\u0026#39;hosts\u0026#39;)\n\ndef main():\n    engine = create_engine(\u0026#39;mysql://USER:PASS@HOST:PORT/DATABASE\u0026#39;)\n\n    # Perform automap and create a session\n    Base.prepare(engine, reflect=True)\n    session = Session(engine)\n\n    # Use the session\n    for host in session.query(Hosts).filter_by(state=1).all():\n        print(host.name, host.state, host.os.name)\n\nif __name__ == \u0026#34;__main__\u0026#34;:\n    main()\nRun it:\n$ python sqlalchemy2.py\nastoria 1 Debian Stretch\nleand 1 Debian Wheezy\nalgar 1 Debian Jessie\nI just added a Hosts class that maps to the hosts table. It adds a ForeignKey to the os_id column and a relationship named os. Usage is much simpler now: it boils down to a single query and there is no need to look up the name of the operating system by hand. But still, there is a magic number in use (state=1).\nTry #3 As noted above, the state column has 4 known values. This pretty much looks like an enum. SQLAlchemy has another nice feature for this particular use case: custom types. The following version replaces the magic numbers with a custom type implemented as Python enum:\nimport enum\n\nfrom sqlalchemy import Column\nfrom sqlalchemy import ForeignKey\nfrom sqlalchemy import Integer\nfrom sqlalchemy import TypeDecorator\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy.orm import relationship\n\n# The declarative base used for the SQLAlchemy reflection.\nBase = automap_base()\n\n@enum.unique\nclass HostState(enum.IntEnum):\n    # This host is active and currently in use.\n    ACTIVE = 1\n    # It is no longer in use, it is turned off.\n    DISABLED = 2\n    # Unknown, literally\n    UNKNOWN = 3\n    # This host is gone, away forever.\n    DELETED = 4\n\nclass HostStateTypeDecorator(TypeDecorator):\n    impl = Integer\n\n    def process_bind_param(self, value, dialect):\n        if isinstance(value, HostState):\n            value = value.value\n        return value\n\n    def process_result_value(self, value, dialect):\n        if value is not None:\n            value = HostState(value)\n        return value\n\nclass Hosts(Base):\n    __tablename__ = \u0026#39;hosts\u0026#39;\n\n    # custom types\n    state = Column(HostStateTypeDecorator)\n    os_id = Column(Integer, ForeignKey(\u0026#39;os_types.id\u0026#39;))\n\n    # relationships\n    os = relationship(\u0026#39;os_types\u0026#39;, backref=\u0026#39;hosts\u0026#39;)\n\ndef main():\n    engine = create_engine(\u0026#39;mysql://USER:PASS@HOST:PORT/DATABASE\u0026#39;)\n\n    # Perform automap and create a session\n    Base.prepare(engine, reflect=True)\n    session = Session(engine)\n\n    # Use the session\n    for host in session.query(Hosts).filter_by(state=HostState.ACTIVE).all():\n        print(host.name, host.state, host.os.name)\n\nif __name__ == \u0026#34;__main__\u0026#34;:\n    main()\nRun it:\n$ python sqlalchemy3.py\nastoria HostState.ACTIVE Debian Stretch\nleand HostState.ACTIVE Debian Wheezy\nalgar HostState.ACTIVE Debian Jessie\nSweet, the magic numbers are now gone and one can build a query using the custom type: state=HostState.ACTIVE. Magic numbers are converted in both directions via process_result_value() and process_bind_param().
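The custom type pays off for writes as well, not just for queries. A short sketch reusing the session and classes from above — assigning the enum member directly and letting process_bind_param() turn it into the magic number on flush:\nhost = session.query(Hosts).filter_by(name=\u0026#39;fiddle\u0026#39;).one()\nhost.state = HostState.DISABLED  # stored as 2 in the database\nsession.commit()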
Tested with Python 3.6.4 and SQLAlchemy 1.2.6.\n","permalink":"https://nblock.org/2018/04/03/sqlalchemy-automap-and-custom-types/","summary":"Using SQLAlchemy\u0026rsquo;s Automap along with custom types for convenient data access.","title":"SQLAlchemy automap and custom types"},{"content":"I am transitioning GPG keys from my old 4096-bit RSA key to a new 4096-bit RSA key. The old key will continue to be valid for some time, but I prefer all new correspondence to be encrypted in the new key, and will be making all signatures going forward with the new key.\nHere is my transition statement:\n-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 I am transitioning GPG keys from my old 4096-bit RSA key to a new 4096-bit RSA key. The old key will continue to be valid for some time, but I prefer all new correspondance to be encrypted in the new key, and will be making all signatures going forward with the new key. This transition document is signed with both keys to validate the transition. If you have signed my old key, I would appreciate signatures on my new key as well, provided that your signing policy permits that without reauthenticating me. The old key, which I am transitional away from, is: pub rsa4096 2013-02-23 [SC] [expires: 2018-02-22] Key fingerprint = 89C9 5CF0 871D 6EC1 0A3F ECD9 741E 93C2 2741 5CF9 The new key, to which I am transitioning, is: pub rsa4096 2018-02-17 [SC] [expires: 2021-02-16] Key fingerprint = 65D0 A6E4 6387 883E C3B5 E78C D67A 997E FEA3 D7C1 To fetch the full new key from a public key server using GnuPG, run: gpg --recv-keys D67A997EFEA3D7C1 If you have already validated my old key, you can then validate that the new key is signed by my old key: gpg --check-sigs D67A997EFEA3D7C1 If you then want to sign my new key, a simple and safe way to do that is by using caff (shipped in Debian as part of the \u0026#34;signing-party\u0026#34; package) as follows: caff D67A997EFEA3D7C1 Find contact details at https://nblock.org/about if you have any questions about this document or this transition.
Florian Preinstorfer 17-02-2018 -----BEGIN PGP SIGNATURE----- iQIzBAEBCgAdFiEEy0qC0zkB+ER+IRLzPkJ06FLyrlMFAlqINJUACgkQPkJ06FLy rlOwZg//QH4s6rRhcHbLx0ASr+LuqbJcqLGuZZxC/TcX0RTBVK/6EOzxcw/HFYpU P0Hb65GYeVWHAnPMqp8t+lL6dhgCjE4AtddlXX+6T0Pusuh5SPJ23ztymq7DQwF+ giC1eRYWF0tpBbEpUuTqgFAkeMi3wRJsDCuh2z542rgv3EcjeS7u/j0RsBKRFLbB FRC+YbIWEjLURUVzpYYqlEOq/3Q5x0p+JF3oyUG4Uz8hfToM0CxzS22GUgoBUqfg 1rob57ovt8kK+bp1IdWz/3T4GS2xWvQZljSQ4xWHySqCW4Yspej/L3cFToacdVM/ kMWOh25kpZrE1GQP2LVUK7YsbAa9jvjtBg2KSQw7nopor8RD1WuW3ztW71NVEkBB Qd3EW0bIBur7CeWxg8m6lJfUY67Dymx/fHXiaLH6wJ1eVP0JzlUpWqhFhqTfWSQG e77MXwR5oAbKUghYzB+gNO6g3xOVTlTV1THpRuD/fI6acKdxKcOioFJg6IOP6X13 sj7QPG0Ze5xBAdGoYmNzRSEdgbgT61Mk/gu2cKXJ4050xpBQw2FhFHGLi+SwHvuv w9nE+V9l8hJHufcn2YUpPSUJcneTET0JtLLpUTxjG+GjVCByBdM98MDr7nyb7NQc 6Sz9fF9nUunwvZZbPjI8skvCnNfOYLui9j2mhCLumqSbnkqMlQqJAjMEAQEKAB0W IQSjDdzD+eoe0FJE12Ih7BDxHIXZfAUCWog0lQAKCRAh7BDxHIXZfFtHD/0chSc8 x2tHHHC/EnAgc52vAEi85ZPkX6TYiDTT/rEO1U5EB45CDf1MZ5wMngqAI/3b0TeW JMuESwcbCu4CmLkPJhznnJgFzZ9pdnautzYoqwwTujX/Y4YvQzb8HoJmARK1A/JF +JtE0mpuWQen3tJC5hyRgd798+lXqFbNSnsFJn/1UCuBBPKhVpJERyWukQuXWASe SW61n4xWQoxIGTzI/AWm2KE/pe8O7m/eyj10I5HQj2r0eMkWuqHjdHf8+X+GcRoo P3RmO7bhwPNbyx6yGeegykWavw+xQxSUKHlLfUzRY2cz4t5Oj9zKwHflEkeZeQYJ VPNZWdtvEc3PmLzbOUtEJmT905I6mWgCyX1ES3AT+aYG1TWHYEss0v8olc6zqcqX v4yrjlJ4DjbZ7dcmEKAeSvi4M7l3lEZTw9Z3YENFsjzUpQxW0Gb6V24c3Sy6zBgE Ffo4ltPQp4Wg0+PdrYKAsaWNNgPhKzm69m24AI+jHIgZDIF546ySNc5SxaDfp7Xy AjgfvjhQI33uWq9yLea/RN9g0C4OBttuUJbAuKOcCykOIVfuvYkCTOnDzn2MFJPr lSq2oOexypJgUPFR6RRNvdP0WB/hJjPD5kY+X/A35kl/+/JV/dabUfzwnAaVcvEt OpjKeOTPQaYNItLd8OPR8BBr7/QvWDgbuu1mwg== =Zctm -----END PGP SIGNATURE----- You can also download the above statement from here. Use the following commands to verify the integrity of the transition statement:\n$ gpg --recv-keys D67A997EFEA3D7C1 $ curl https://nblock.org/2018/02/17/new-gnupg-key/gpg-transition-statement-D67A997EFEA3D7C1.txt.asc | gpg --verify ","permalink":"https://nblock.org/2018/02/17/new-gnupg-key/","summary":"GnuPG key transition statement","title":"GnuPG key transition statement"},{"content":"My home network consists of a wireless router running LEDE 17.01 and a ZTE MF831 LTE USB modem for Internet connectivity. From time to time the Internet connection fails and the only way to recover was to physically reconnect the USB modem. So each time it failed, I had to get to my wireless router, pull the USB modem and reconnect it. This post describes the steps I took to work around this issue.\nThe lost connection doesn\u0026rsquo;t seem to follow a pattern. Sometimes it happens every day and sometimes the connection works for weeks without any issues. But still, the problem exists and when it kicks in, the system is not able to recover itself. When the connection dies, logread contains the following log entries:\n[snipped] daemon.info pppd[9626]: No response to 5 echo-requests daemon.notice pppd[9626]: Serial link appears to be disconnected. daemon.info pppd[9626]: Connect time 39.7 minutes. daemon.info pppd[9626]: Sent 90740877 bytes, received 878872777 bytes. daemon.notice netifd: Network device \u0026#39;3g-provider\u0026#39; link is down daemon.notice netifd: Interface \u0026#39;provider\u0026#39; has lost the connection daemon.warn dnsmasq[1466]: no servers found in /tmp/resolv.conf.auto, will retry daemon.info odhcpd[954]: Using a RA lifetime of 0 seconds on br-lan daemon.notice pppd[9626]: Connection terminated. daemon.notice pppd[9626]: Modem hangup daemon.info pppd[9626]: Exit. 
daemon.notice netifd: Interface \u0026#39;provider\u0026#39; is now down daemon.notice netifd: Interface \u0026#39;provider\u0026#39; is setting up now daemon.notice netifd: provider (9858): comgt 12:02:15 -\u0026gt; -- Error Report -- daemon.notice netifd: provider (9858): comgt 12:02:15 -\u0026gt; ----\u0026gt; ^ daemon.notice netifd: provider (9858): comgt 12:02:15 -\u0026gt; Error @118, line 9, Could not \\ write to COM device. (1) daemon.notice netifd: provider (9858): daemon.notice pppd[9873]: pppd 2.4.7 started by root, uid 0 local2.info chat[9875]: abort on (BUSY) local2.info chat[9875]: abort on (NO CARRIER) local2.info chat[9875]: abort on (ERROR) local2.info chat[9875]: report (CONNECT) local2.info chat[9875]: timeout set to 10 seconds local2.info chat[9875]: send (AT\u0026amp;F^M) local2.info chat[9875]: alarm local2.info chat[9875]: -- write timed out local2.err chat[9875]: Failed daemon.err pppd[9873]: Connect script failed [snipped] Interestingly, the devices in /dev/ttyUSB* are still there and the logs don\u0026rsquo;t contain anything USB related.\nPPPD notices that the serial connection with the modem is broken and shuts down. Simply restarting the interface afterwards (ifdown/ifup, web interface) does not work. The first step of the workaround is to restart the USB modem via software. Fortunately, this Stack Exchange post pointed me in the right direction. A simple unbind followed by a bind on the correct USB port works fine. On unbind, the modem disappears and all /dev/ttyUSB* devices are removed by the kernel. On bind, the kernel re-initializes the modem, does some mode switching and a few seconds later, the /dev/ttyUSB* devices reappear. After this unbind/bind cycle, PPPD is started automatically and Internet connectivity is restored. A list of USB ports may be obtained via:\n# find /sys/bus/usb/devices/\n/sys/bus/usb/devices/\n/sys/bus/usb/devices/1-1\n/sys/bus/usb/devices/usb1\n/sys/bus/usb/devices/usb2\n/sys/bus/usb/devices/1-0:1.0\n/sys/bus/usb/devices/1-1:1.0\n/sys/bus/usb/devices/1-1:1.1\n/sys/bus/usb/devices/1-1:1.2\n/sys/bus/usb/devices/2-0:1.0
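Before wiring this into a hook, the reset can be tried by hand. A sketch, assuming the modem sits at port 1-1 as in the listing above (this is exactly what the script below automates):\n# echo \u0026#39;1-1\u0026#39; \u0026gt; /sys/bus/usb/drivers/usb/unbind\n# sleep 1\n# echo \u0026#39;1-1\u0026#39; \u0026gt; /sys/bus/usb/drivers/usb/bind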
There is one problem though: I want the reconnection steps to trigger automatically when PPPD detects that the serial link stopped working. Fortunately, PPPD offers various hooks that one can leverage. In my case, the ip-down hook is the correct one. It is called with various arguments and with some environment variables. To enable an ip-down hook on OpenWRT/LEDE, create the directory /etc/ppp/ip-down.d and place your executable ip-down script in this directory. All ip-down scripts in /etc/ppp/ip-down.d are executed each time PPPD had working IP connectivity and is in the process of shutting down. The last part of the puzzle is to only trigger the reconnection when the serial link is faulty. In particular, do not trigger when:\nThe user requested to shut down the interface (ifdown, web interface). The USB modem is physically disconnected. The complete solution is the shell script listed below. It leverages the OpenWRT/LEDE logging system and the fact that PPPD sets the environment variable PPPD_PID. I only need to inspect log entries produced by the currently running PPPD and find log entries that indicate a faulty serial link.\n#!/bin/sh\n# pppd ip-down script to reset a USB LTE Modem when the serial link is faulty.\n\n# The USB port where the USB LTE modem is connected.\nUSB_DEVICE_ADDRESS=\u0026#34;1-1\u0026#34;\n\n# Try to find out why pppd is shutting down and only reset the device when the\n# serial link is faulty. Exit early otherwise. Luckily, pppd provides us with\n# some environment variables/arguments that we can leverage:\n# - PPPD_PID -\u0026gt; The PID of the *calling, currently running* pppd.\n\n# Exit if we are not called by pppd.\n[ -z \u0026#34;$PPPD_PID\u0026#34; ] \u0026amp;\u0026amp; exit 0\n\n# pppd logs that a certain amount of echo-requests sent to the device failed.\nif ! logread | grep -q \u0026#34;pppd\\[$PPPD_PID\\]: No response to .\\+ echo-requests\u0026#34;; then\n    exit 0\nfi\n\n# pppd also logs that the serial link appears to be disconnected.\nif ! logread | grep -q \u0026#34;pppd\\[$PPPD_PID\\]: Serial link appears to be disconnected\u0026#34;; then\n    exit 0\nfi\n\n# Reset the device\nlogger \u0026#34;Reset USB device at address $USB_DEVICE_ADDRESS\u0026#34;\necho \u0026#34;$USB_DEVICE_ADDRESS\u0026#34; \u0026gt; /sys/bus/usb/drivers/usb/unbind\nsleep 1\necho \u0026#34;$USB_DEVICE_ADDRESS\u0026#34; \u0026gt; /sys/bus/usb/drivers/usb/bind\nlogger \u0026#34;Reset complete\u0026#34;\n","permalink":"https://nblock.org/2017/09/19/automatically-recover-a-failing-usb-lte-modem/","summary":"Automatically recover a failing LTE ZTE MF831 USB modem on OpenWRT/LEDE","title":"Automatically recover a failing USB LTE modem"},{"content":"Images in presentations tend to be very similar to each other. For example, a base image visualizes an empty sequence diagram and with each slide in the presentation more and more items are added to the sequence diagram (e.g. slides for my talk at Grazer Linuxtage 2017). Inkscape is a nice solution to draw such diagrams because it allows putting the various steps on different layers. Different images may be generated by selectively showing/hiding layers before running the export process.\nUnfortunately, Inkscape (version \u0026lt;= 0.92) does not provide a command line option where the user can pass a list of layers that should be visible in the exported image. One can export such images by hand using the Inkscape GUI but this is an error-prone and repetitive process and should be automated. Since SVG is just an XML file, one can use other XML tools to perform the desired processing before exporting.\nSuppose you have an SVG file with the following layers (labels):\nbase transport-1 transport-2 transport-3 In order to create a new SVG file where only the layers base and transport-1 are visible, the following command may be used:\n$ xmlstarlet ed -P \\\n  -N inkscape=http://www.inkscape.org/namespaces/inkscape \\\n  -N svg=http://www.w3.org/2000/svg \\\n  -u \u0026#39;//*/svg:g[@inkscape:label]/@style\u0026#39; -v display:none \\\n  -u \u0026#39;//*/svg:g[@inkscape:label=\u0026#34;base\u0026#34;]/@style\u0026#39; -v display:inline \\\n  -u \u0026#39;//*/svg:g[@inkscape:label=\u0026#34;transport-1\u0026#34;]/@style\u0026#39; -v display:inline \\\n  input.svg \u0026gt; output.svg\nThe command above loads SVG namespaces, selects objects by using an XPath expression and updates the display attribute to the desired value. The first step is to select all labels and hide them.
After that, the labels base and transport-1 are selected to display them.\nThe resulting SVG file can be converted to another format such as PDF using Inkscape:\n$ inkscape --without-gui --export-area-page --export-pdf-version=\u0026#34;1.5\u0026#34; \\ --export-pdf=output.pdf output.svg Hide this somewhere in a Makefile and you\u0026rsquo;re done.\n","permalink":"https://nblock.org/2017/05/07/export-multiple-svg-layers/","summary":"How to export multiple layers of an SVG file using xmlstarlet and Inkscape.","title":"Export multiple SVG layers"},{"content":"Tryton and its modules provide a plethora of wizards that heavily rely on various date and time fields used in records. This blog post aims to describe the relation between different date and time fields in Tryton along with their impact on wizards.\nDate and time fields and their meaning This section describes the various date and time fields for selected modules.\nProducts/Variants Products → Customers Lead Time: The time required to provide this product for a customer. Example: 1d for next day delivery. Example: 21d for delivery in 3 weeks (= 21 days). Products → Suppliers → Supplier Lead Time (per supplier): The time a supplier needs to provide this product. Example: 1d for next day delivery. Example: 14d for delivery in two weeks (= 14 days). Variants → Production → Lead Times Lead time (per Bill of Material): The time required to produce this product using the assigned BOM (Bill of Material). Example: 3d if the production team requires 3 days to produce the product. Purchase Purchase Purchase Date: The date when a purchase is commissioned. Purchase Requests Best Purchase Date: The best date to commission a purchase. It is determined automatically based on various lead times. The purchase should be commissioned on this date to avoid delays on downstream processes (Production, Sales). Expected Supply Date: The date when a shipment is expected. This field is calculated based on supplier lead times. Inventory \u0026amp; Stock Customer Shipments Planned Date: The customer shipment is planned for this date. This field is also displayed as shipping date on sale lines. Effective Date: The customer shipment was fulfilled on this date. Supplier Shipments Planned Date: The supplier shipment is awaited for this date. Effective Date: The supplier shipment was delivered on this date. Production Planned Date: The completion of the production is planned for this date. Planned Start Date: The production should be started on this date to finish on time (Planned Date = Planned Start Date + Production Lead Time) Sales Sale Date: The sale is done on this date. The customer signed the quotation. Shipping Date (per sale line): The date of the earliest possible shipment. This field is calculated automatically. Times in context The figure below illustrates the context between different times when a device is sold.\nA device is produced using a BOM comprised of these parts:\nPurchase Part 1 (purchasable) Production Part 1 (producible via BOM) Production Part 2 (producible via BOM) The times are calculated as follows:\nThe Sale Date is determined by the customer. The Shipping Date is calculated using the Sale Date and the Lead Time (Products → Customers). The Production Start Date for Device is the Shipping Date minus Lead Time (Variants → Production). The Purchase Date for Purchase Part 1 is calculated based on Production Start Date for Device minus Lead Time (Products → Suppliers). 
The Production Start Date for Production Part 1 is calculated based on Production Start Date for Device minus Lead Time (Variants → Production). The Production Start Date for Production Part 2 is calculated based on Production Start Date for Device minus Lead Time (Variants → Production). Concrete Example The figure below uses concrete dates for the generic example above.\nThe following times are used:\nProducts → Customers: The lead time for Device is 4 weeks. Variants → Production: The lead time to produce Device is 4 days. Variants → Suppliers: The lead time to purchase Purchase Part 1 is 1 week. Variants → Production: The lead time to produce Production Part 1 is 2 days. Variants → Production: The lead time to produce Production Part 2 is 2 weeks. The following dates are calculated:\nThe Device is sold on 29.03. The Shipment Date is therefore 26.04. The production of Device starts on 21.04. and ends on 25.04. The best purchase date for Purchase Part 1 is calculated as 14.04. The shipment is awaited on 21.04. The production of Production Part 1 starts on 18.04. and ends on 20.04. The production of Production Part 2 starts on 06.04. and ends on 20.04. Tryton uses a 1-day buffer period for productions. Conclusion This blog post is based on an internal wiki article and is used to educate staff about various times in Tryton. It might be incomplete as it only considers a few modules. Feel free to contact me in case there are mistakes or you are missing certain fields.\nThanks to Wolfgang Silbermayr for his valuable feedback.\n","permalink":"https://nblock.org/2017/04/11/times-in-tryton/","summary":"The relation between date and time fields in Tryton.","title":"Times in Tryton"},{"content":"The Tryton GTK client provides numerous keyboard shortcuts to speed up data entry in widgets. This post aims to document them along with their pitfalls.\nDate, datetime, and time widgets The client documentation covers keyboard shortcuts for those widgets, too. The rules are:\nLower case letters increase the value by one unit. Upper case letters decrease the value by one unit. The following keyboard shortcuts are defined:\nIncrease Decrease Operation\ns S ± 1 second\ni I ± 1 minute\nh H ± 1 hour\nd D ± 1 day\nw W ± 1 week\nm M ± 1 month\ny Y ± 1 year\nTimedelta widgets Some fields, such as lead time on supplier, use timedelta widgets. They also support shortcuts, but they are different depending on the user interface language.\nThe following keyboard shortcuts are defined for English and German:\nEN DE Operation\ns s 1 second\nm m 1 minute\nh h 1 hour\nd t 1 day\nw W 1 week (= 7 days)\nM M 1 month (= 30 days)\nY J 1 year (= 365 days)\nHere are a few usage examples for English:\n2Y: 2 years 1M: 1 month 2w: 2 weeks 3d: 3 days 5h or 05:00: 5 hours 45m or 00:45: 45 minutes 1d 3h 10m: 1 day, 3 hours and 10 minutes See the implementation for Tryton 4.2 along with the German translation.\n","permalink":"https://nblock.org/2017/04/10/keyboard-shortcuts-in-tryton/","summary":"Keyboard shortcuts for date, datetime, time, and timedelta widgets in Tryton.","title":"Tryton keyboard shortcuts"},{"content":"The Tryton developers released a new version of Tryton a few days ago. We try to keep up with the current stable version, so an upgrade is required. This post lists the subtle changes that caused problems with our installation.\nBefore the upgrade Custom and third-party modules Update dependencies to Tryton 4.2. Tryton 4.2 reworked the translation system which requires translation files to be renamed.
For all languages, rename the .po files as follows (See issue5443): en_US.po → en.po de_DE.po → de.po … Reports Reports use English as fallback language in case no other language is defined. Due to the translation updates one needs to use en as fallback instead of en_US (See: issue5443). Module: Party The field vat_code was renamed to tax_identifier. Adapt to this change in case you are using this value in a report.\nThe full address of a party now includes the party name. In our case the party name is duplicated on reports and we decided to go with this minimal patch for the party module to restore the old behaviour.\n--- a/address.py 2016-12-13 10:16:26.720162514 +0100 +++ b/address.py 2016-12-13 10:16:34.872140410 +0100 @@ -143,8 +143,6 @@ } if context.get(\u0026#39;address_from_country\u0026#39;) == self.country: substitutions[\u0026#39;country\u0026#39;] = \u0026#39;\u0026#39; - if context.get(\u0026#39;address_with_party\u0026#39;, False): - substitutions[\u0026#39;party_name\u0026#39;] = self.party.full_name for key, value in substitutions.items(): substitutions[key.upper()] = value.upper() return substitutions The report generation is broken for parties when certain countries are used in the address record. The following patch fixes the issue for Tryton 4.2 (See: issue6111):\ndiff -r 706751992f88 address.py --- a/address.py Mon Nov 28 16:19:18 2016 +0100 +++ b/address.py Thu Dec 15 13:08:35 2016 +0100 @@ -138,6 +138,11 @@ \u0026#39;country\u0026#39;: self.country.name if self.country else \u0026#39;\u0026#39;, \u0026#39;country_code\u0026#39;: self.country.code if self.country else \u0026#39;\u0026#39;, } + + # Map invalid substitutions district* to subdivision* on 4.2. + substitutions[\u0026#39;district\u0026#39;] = substitutions[\u0026#39;subdivision\u0026#39;] + substitutions[\u0026#39;district_code\u0026#39;] = substitutions[\u0026#39;subdivision_code\u0026#39;] + if context.get(\u0026#39;address_from_country\u0026#39;) == self.country: substitutions[\u0026#39;country\u0026#39;] = \u0026#39;\u0026#39; if context.get(\u0026#39;address_with_party\u0026#39;, False): Module: Stock Previous versions used \u0026lt;product_name(move.product.id, shipment.delivery_address.party.lang and shipment.delivery_address.party.lang.code or 'en_US')\u0026gt; in the report. This was changed to move.product.rec_name in the upstream delivery note report. Adapt accordingly in case you are using a custom delivery note report. The upgrade To get started, clone the running instance and test the upgrade with the clone. Our Tryton instance is installed in a virtual environment, so I started out by updating the version information in our Ansible role from 4.0 to 4.2. After that, Ansible can create the new virtual environment. Beware to upgrade all custom modules before this step as they need to depend on the new Tryton version.\nRun pip freeze | grep \u0026quot;4.0\u0026quot; in the virtual environment to make sure no modules from 4.0 are lingering around.\nFix translations In order to keep custom translations, one needs to convert them before performing the database upgrade. 
Connect to the database and update all translations:\nUPDATE ir_translation SET lang = \u0026#39;en\u0026#39; WHERE lang = \u0026#39;en_US\u0026#39;;\nUPDATE ir_translation SET lang = \u0026#39;de\u0026#39; WHERE lang = \u0026#39;de_DE\u0026#39;;\nUpgrading the database After the translation updates we are ready to upgrade the database:\n$ trytond-admin --verbose --config trytond.conf --database \u0026lt;dbname\u0026gt; --all\nThe database upgrade completed successfully.\nThe aftermath A few problems popped up after the upgrade:\nDuplicating a product requires \u0026ldquo;administration\u0026rdquo; permission (See: issue6115) Menus are always displayed in English in the Windows client (See: issue6116) The URL parameter does not open the referenced record (See: issue6119) Additional information https://discuss.tryton.org/t/migration-from-4-0-to-4-2/161 Until next time.\n","permalink":"https://nblock.org/2016/12/15/notes-on-upgrading-from-tryton-4.0-to-tryton-4.2/","summary":"A few notes on the upgrade from Tryton 4.0 to Tryton 4.2.","title":"Notes on upgrading from Tryton 4.0 to Tryton 4.2"},{"content":"One way of storing secrets within Ansible is to use the built-in Vault and the respective command-line tool ansible-vault. A common use case is to have a key file available locally (a file containing the secret key information) and to use ansible-vault to encrypt/decrypt files as needed. The documentation on Ansible Vault should get you started.\nLet\u0026rsquo;s assume that there is an encrypted file in group_vars/mygroup/vault.yml. In order to change the content of the file, one has to run:\n$ ansible-vault edit group_vars/mygroup/vault.yml\n$ # Your EDITOR of choice is spawned\nThe file gets decrypted and a fresh instance of your EDITOR of choice is loaded. On exit, the content of the buffer gets encrypted and saved back to the file.\nThere are some issues with this model:\nA fresh instance of Vim is spawned with every change Transparent editing is not possible No use of nice editing features such as diffing with Fugitive The interruption of the current workflow. For example, I need to background Vim or spawn a new shell, edit the encrypted file and get back to my previous Vim session. One solution is to put the following snippet of Vim autocommands into your ~/.vimrc to handle Ansible Vault files transparently:\naugroup ansible-vault\n    autocmd!\n    autocmd BufReadPre,FileReadPre vault.yml setlocal viminfo=\n    autocmd BufReadPre,FileReadPre vault.yml setlocal noswapfile noundofile nobackup\n    autocmd BufReadPost,FileReadPost vault.yml silent %!ansible-vault decrypt\n    autocmd BufWritePre,FileWritePre vault.yml silent %!ansible-vault encrypt\n    autocmd BufWritePost,FileWritePost vault.yml silent undo\naugroup END\nThe snippet above creates a new autocommand group named ansible-vault and resets any existing autocommands within this group. Before loading, a few settings are adjusted to avoid the leakage of secret data. After the file is loaded, it gets decrypted on the fly and the buffer content is replaced with the clear text representation of the encrypted file. The content of the buffer is encrypted before it is written back to disk. After writing, the undo command is executed once to keep the clear text representation in the buffer in case further edits are needed.\nNote: This autocommand group only kicks in for files named vault.yml (that\u0026rsquo;s how I name those files). If you have a different naming scheme, you need to adjust this pattern to suit your needs.
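For instance — a sketch of mine, not from the original setup — covering an additional *.vault.yml suffix only takes a second pattern in each autocmd line, since Vim accepts a comma-separated pattern list:\nautocmd BufReadPost,FileReadPost vault.yml,*.vault.yml silent %!ansible-vault decrypt\nThe other events are extended the same way.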
Hint: Take a look at this Stack Overflow answer if you want to use git diff for Vault files.\nHappy Vimming!\n","permalink":"https://nblock.org/2016/10/31/transparently-edit-ansible-vault-files-with-vim/","summary":"A few Vim autocommands to transparently edit Ansible Vault files with Vim.","title":"Transparently edit Ansible Vault files with Vim"},{"content":"WARNING: The steps in this blog post deliberately modify the history of a Git repository and even get rid of some information in the repository. If you are fortunate enough to have a backup somewhere, refer to the excellent write-up from Linus Torvalds on how to recover broken blob objects.\nPreface A colleague approached me and said:\nI have this weird Git repository which I can commit to and work with, but I can\u0026rsquo;t push it to a remote Git server. The Git tools complain about damaged objects in the repository.\nAfter a brief discussion, it turned out that this is the only available copy of the repository. There are no off-site backups and no remote repositories available. Furthermore, about a year ago, the hard drive where this Git repository used to reside crashed and a lot of data was lost.\nThis might get interesting.\nInvestigating the current state The colleague gave me a copy of the Git repository and I started the investigation. Git allows one to check the current state of the repository using git fsck:\n$ cd working\n$ git fsck\nerror: inflate: data stream error (invalid distance too far back)\nerror: sha1 mismatch 25c49e20b0c3eca36713a9cb7a21b25a172f7b0d\nerror: 25c49e20b0c3eca36713a9cb7a21b25a172f7b0d: object corrupt or missing\nChecking object directories: 100% (256/256), done.\nmissing blob 25c49e20b0c3eca36713a9cb7a21b25a172f7b0d\n...\nOuch. The man page of git gc states the following about sha1 mismatches:\nThe database has an object who’s sha1 doesn\u0026rsquo;t match the database value. This indicates a serious data integrity problem.\nFix (most of) the repository Let\u0026rsquo;s move the broken file somewhere else and run git fsck again:\n$ mv .git/objects/25/c49e20b0c3eca36713a9cb7a21b25a172f7b0d /tmp\n$ git fsck\nChecking object directories: 100% (256/256), done.\nbroken link from tree ef557adecb5ed7e114d93a7a9a82cbf4b0cd30f1 to blob 25c49e20b0c3eca36713a9cb7a21b25a172f7b0d\nmissing blob 25c49e20b0c3eca36713a9cb7a21b25a172f7b0d\nNow we know which tree object is affected. Fortunately, the damage seems fairly limited as only one tree object refers to the broken blob object. We only know the hash of the broken blob object but do not know the actual filename. The tree object hash may be used to find out to what file in the repository the broken blob object refers to:\n$ git ls-tree ef557adecb5ed7e114d93a7a9a82cbf4b0cd30f1 | \\\n  grep 25c49e20b0c3eca36713a9cb7a21b25a172f7b0d\n100644 blob 25c49e20b0c3eca36713a9cb7a21b25a172f7b0d main.c\nOK, main.c is broken, but which commit points to the tree object? git log to the rescue:\n$ git log --pretty=format:\u0026#34;%T %H\u0026#34; | grep ef557adecb5ed7e114d93a7a9a82cbf4b0cd30f1\nef557adecb5ed7e114d93a7a9a82cbf4b0cd30f1 e0a7722fd9aa94f632ad6427d62189a3ae2b8de5\nThe 2nd column is the hash of the faulty commit: e0a7722fd9aa94f632ad6427d62189a3ae2b8de5. This commit is more than a year old. It really seems that the blob object got damaged during the hard drive crash.
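As an extra sanity check — my addition, not part of the original session — one can confirm that no other commit references the broken blob. git log --raw only lists the old and new object hashes per change and never needs to inflate the blob itself, so it works even with the corrupt object:\n$ git log --all --raw --no-abbrev | grep 25c49e20b0c3eca36713a9cb7a21b25a172f7b0d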
Let\u0026rsquo;s find the commit before and after the faulty commit and take a look at the differences between them:\n$ git log --pretty=format:\u0026#34;%H - %h - %s\u0026#34; ... 85263b18e04e7dc0473f4c4501d366d389bf6e01 - 85263b1 - [redacted] b87625ab801db7d3452746ca8cc2a1f4137ed924 - b87625a - [commit after, redacted] e0a7722fd9aa94f632ad6427d62189a3ae2b8de5 - e0a7722 - [faulty commit, redacted] 4fb7737350cc0b646177cb0a041fc73422ffc98a - 4fb7737 - [commit before, redacted] 75199fa2dab382cbf3395be0a696e72e884163b0 - 75199fa - [redacted] ... $ git diff --shortstat 4fb7737..b87625a 9 files changed, 1444 insertions(+), 1056 deletions(-) Oh, that\u0026rsquo;s a lot! How about the diff for main.c only?\n$ git diff --shortstat 4fb7737..b87625a -- main.c 1 file changed, 977 insertions(+), 857 deletions(-) This does not look any better. As there is no other backup of main.c from this particular point in time, we might just kick out the faulty commit entirely. After all, having a commit where an important file is missing is not helpful either. Even worse, the project can\u0026rsquo;t be built from this commit at all. In this particular situation, we can get rid of the faulty commit and \u0026ldquo;fix\u0026rdquo; the repository that way.\nOne might think of interactive rebasing in this situation, but it turns out that it does not work due to the broken object database. Another way to solve this problem is to use graft points (grafts). The description from the Git wiki sounds promising:\nIt works by letting users record fake ancestry information for commits. This way you can make Git pretend the set of parents a commit has is different from what was recorded when the commit was created.\nCurrently, the Git repository looks like this:\nx --- 75199fa --- 4fb7737 --- e0a7722 --- b87625a --- 85263b1 --- x [before] [faulty] [after] And it should look like this:\n/------------------\\ / \\ x --- 75199fa --- 4fb7737 e0a7722 b87625a --- 85263b1 --- x [before] [faulty] [after] Let\u0026rsquo;s assign the commit b87625a its new parent commit 4fb7737 using the grafts file:\n$ mkdir .git/info $ echo \u0026#34;b87625ab801db7d3452746ca8cc2a1f4137ed924 4fb7737350cc0b646177cb0a041fc73422ffc98a\u0026#34; \\ \u0026gt; .git/info/grafts Apply the changes permanently:\n$ git filter-branch -- --all This Git repository is now somewhat dirty and we would like to have a clean copy of it. The easiest way to accomplish this is to simply clone the repository locally (see man page of git filter-branch):\n$ cd .. $ git clone working clean $ Cloning into \u0026#39;clean\u0026#39;... $ done. Looks good, but what about git fsck and the history?\n$ cd clean $ git fsck $ Checking object directories: 100% (256/256), done. $ git log --pretty=format:\u0026#34;%H - %h - %s\u0026#34; ... 161ef4045da4cc6750599a11447380a76ca017b6 - 161ef40 - [redacted (new hash)] c56d22e9da16aa59fa5fb47d11e4c2c930d1b583 - c56d22e - [commit after, redacted (new hash)] 4fb7737350cc0b646177cb0a041fc73422ffc98a - 4fb7737 - [commit before, redacted] 75199fa2dab382cbf3395be0a696e72e884163b0 - 75199fa - [redacted] ... Done, now the Git object database is in a consistent state again and it is possible to push it to a remote Git server.\nAdditional information https://git.kernel.org/cgit/git/git.git/tree/Documentation/howto/recover-corrupted-blob-object.txt?id=HEAD https://git-scm.com/book/en/v2 Various Git man pages Credits Lukas for reviewing this blog post and his valuable input. 
Go and create off-site backups!\n","permalink":"https://nblock.org/2016/10/24/recover-most-of-a-broken-git-repository/","summary":"Repair as much as possible from a Git repository with broken blob objects and no backups available.","title":"Recover most of a broken Git repository"},{"content":"At work, we wanted to upgrade the Tryton installation from version 3.8 to version 4.0. There is very little information available on how to conduct such upgrades and how to deal with the various errors that pop up during the process. The post starts with notes on some of the changes followed by a description of problems as they emerged during the upgrade and concludes with some required changes after the upgrade.\nBefore the upgrade Split of the trytond binary The trytond executable has been split into multiple distinct executables: trytond-admin, trytond-cron. There is no backwards compatibility layer in the new trytond, so you will quickly notice which scripts need to change as some parameters are no longer accepted by trytond:\nAll administrative tasks such as database upgrades are now handled by trytond-admin. All background tasks are now handled by trytond-cron and the new trytond no longer accepts the --cron parameter. As such, a new service file is needed to handle background tasks. Configuration changes The configuration for trytond needs to be adjusted:\nThe [jsonrpc] and [xmlrpc] sections have been merged into [web]. The path to the Tryton web interface (SAO) now uses the configuration key root instead of data. The configuration key is part of the [web] section. All connections are now handled by a single port. The separation between JSON-RPC and XML-RPC is gone. This obviously triggers some changes in the nginx configuration as well. Custom modules We use several modules that provide custom reports which require some updates:\nUpdate dependency to Tryton 4.0. Rebuild translations as the translation format changed. I actually deleted the translations of the modules and exported them again from the test instance. Tools and scripts The tools that leverage Proteus to communicate with Tryton require some changes as well:\nUpdate Proteus to match Tryton 4.0. The XML-RPC connection string must end with a /, e.g. https://user:pass@hostname:port/database/ The upgrade Update Trytond and its modules To get started, clone the running instance and test the upgrade with the clone. Our Tryton instance is installed in a virtual environment, so I started out by updating the version information in our Ansible role from 3.8 to 4.0 and completed the list of required modules from the official documentation. After that, Ansible can create the new virtual environment. Be sure to upgrade all custom modules before this step as they need to depend on the new Tryton version.\nRun pip freeze | grep \u0026#34;3.8\u0026#34; in the virtual environment to make sure no modules from 3.8 are lingering around.\nUpgrading the database OK, the virtual environment is ready.
Let\u0026rsquo;s upgrade the database:\n$ trytond-admin --verbose --config trytond.conf --database \u0026lt;dbname\u0026gt; --all Running the above commands yields:\nTraceback (most recent call last): File \u0026#34;/path/to/tryton/venv/bin/trytond-admin\u0026#34;, line 21, in \u0026lt;module\u0026gt; admin.run(options) File \u0026#34;/path/to/tryton/venv/local/lib/python2.7/site-packages/trytond/admin.py\u0026#34;, line 48, in run Pool(db_name).init(update=options.update, lang=lang) File \u0026#34;/path/to/tryton/venv/local/lib/python2.7/site-packages/trytond/pool.py\u0026#34;, line 155, in init lang=lang) File \u0026#34;/path/to/tryton/venv/local/lib/python2.7/site-packages/trytond/modules/__init__.py\u0026#34;, \\ line 429, in load_modules _load_modules() File \u0026#34;/path/to/tryton/venv/local/lib/python2.7/site-packages/trytond/modules/__init__.py\u0026#34;, \\ line 396, in _load_modules graph = create_graph(module_list)[0] File \u0026#34;/path/to/tryton/venv/local/lib/python2.7/site-packages/trytond/modules/__init__.py\u0026#34;, line 191, in create_graph - set((p[0] for p in packages)))) Exception: Missing dependencies: [u\u0026#39;purchase_request\u0026#39;] This error occurs even if the module trytond-purchase-request is installed. The solution is to run the following command instead (with this exact argument order):\n$ trytond-admin --verbose --config trytond.conf --database \u0026lt;dbname\u0026gt; -u purchase_request --all Subsequent invocations do not require the -u purchase_request parameter.\nNow, the upgrade completes but spits out the following warning several times:\nWARNING trytond.modules.product.product The column \u0026#34;category\u0026#34; on table \\ \u0026#34;product_template\u0026#34; must be dropped manually The product categories got overhauled with Tryton 4.0. Tryton already migrated the respective data in the database and now warns us that we should better get rid of the old category column. This warning may be fixed by connecting to the database and do what Tryton suggests:\nALTER TABLE product_template DROP COLUMN category; The database upgrade is now complete. Subsequent invocations do not yield any errors or warnings.\nRemove the webdav module The webdav module is no longer bundled as core module for Tryton 4.0. It is now available as separate module. We never used it, so it is best to remove it altogether. Connect to the database and remove it from the ir_module table (thx @cedk for the hint):\nDELETE FROM ir_module WHERE name=\u0026#39;webdav\u0026#39;; After the upgrade Usability improvements Tryton 4.0 allows to set default values for several fields. This is really useful as it saves the user a few clicks for each created product and improves data consistency.\nProduct categories The product category was replaced by a categories field. This allows multiple categories per product. A single category is used for accounting.\nFor our use case we had to update the product categories and the accounting category for all products. 
I wrote a small migration script that leverages Proteus to update all products (download).\nAdditional information https://discuss.tryton.org/t/migration-from-3-8-to-4-0/96 Until next time.\n","permalink":"https://nblock.org/2016/08/19/notes-on-upgrading-from-tryton-3.8-to-tryton-4.0/","summary":"Some notes on the obstacles I encountered during the upgrade from Tryton 3.8 to Tryton 4.0.","title":"Notes on upgrading from Tryton 3.8 to Tryton 4.0"},{"content":"This post describes how to install a self-signed certificate both system-wide and locally in a Python virtualenv. This is nothing fancy but I regularly need this and it\u0026rsquo;s best to write it down once and for all. As a consequence, I decided to remove all the --no-verify-ssl/--skip-ssl-verification/--insecure options in my tools. Certificate verification is there for a reason; use it.\nGet the certificate from the server:\n$ echo | openssl s_client -connect HOST:PORT 2\u0026gt;/dev/null | openssl x509 -out HOST.crt -text Take a close look at the output from the above command.\nSystem-wide installation For Debian-based systems:\n$ sudo mv HOST.crt /usr/local/share/ca-certificates $ sudo update-ca-certificates See man(8) update-ca-certificates for details.\nFor Arch Linux:\n$ sudo mv HOST.crt /etc/ca-certificates/trust-source/anchors/ $ sudo update-ca-trust extract See man(8) update-ca-trust for details.\nFor a virtualenv Most tools and libraries inside a virtualenv will happily ignore the system-wide certificate bundle. Requests, for example, ships its own cacert.pem file. Fortunately, Requests accepts the environment variable REQUESTS_CA_BUNDLE, which may point to a user-defined CRT file. Simply use the following command as a one-time setup step:\n$ export REQUESTS_CA_BUNDLE=/path/to/HOST.crt Please do not replace the bundled cacert.pem file with your custom version since it will be overwritten upon updates.\nIn case the library is using httplib under the hood (such as proteus), one can use the environment variable SSL_CERT_FILE to point to the user-defined CRT file:\n$ export SSL_CERT_FILE=/path/to/HOST.crt ","permalink":"https://nblock.org/2016/07/28/using-self-signed-certificates/","summary":"How to install self-signed certificates both system-wide and for a virtualenv.","title":"Using self-signed certificates"},{"content":"With the release of Jenkins 2.0, the project moved to a new domain: Jenkins.io. Unfortunately, the Debian repository was moved as well and broke our unattended-upgrades configuration for Jenkins. Luckily, this is rather easy to debug using unattended-upgrade --debug. The command prints all the required details to fix the unattended-upgrades configuration.\nThe following unattended-upgrade configuration works with Jenkins 2.x:\nUnattended-Upgrade::Origins-Pattern { \u0026#34;origin=jenkins.io,suite=binary\u0026#34;; }; Automate all the things!\n","permalink":"https://nblock.org/2016/07/18/unattended-upgrades-for-jenkins/","summary":"Configuring unattended-upgrades for Jenkins 2.x","title":"Unattended-upgrades for Jenkins 2.x"},{"content":"Once in a while I need to transfer files from host A to host C without having a direct connection between them. However, host B can connect to both of them:\nA \u0026lt;----\u0026gt; B \u0026lt;----\u0026gt; C A typical use case is to copy files from a server to an isolated virtual machine.\nUsing traditional netcat The following commands work with the traditional netcat (netcat-traditional on Debian Jessie). 
In order to automatically close the connection, one needs to specify the -q parameter with an appropriate timeout on host A. Otherwise, the connection is kept open until it is closed manually (CTRL-C).\nHost A: $ tar -cJf - directory | nc -q 0 B 8000 Host B: $ nc -l -p 8000 | nc C 9000 Host C: $ nc -l -p 9000 | tar -xJf - Using OpenBSD netcat In case the OpenBSD version of netcat is available (netcat-openbsd on Debian Jessie), one might use the following commands instead.\nHost A: $ tar -cJf - directory | nc B 8000 Host B: $ nc -l 8000 | nc C 9000 Host C: $ nc -l 9000 | tar -xJf - Using SSH One can also establish SSH port forwarding to secure the transfer between host A and host C. Note that the two forwardings on host B (and the listener on host C) must be in place before host A starts sending:\nHost A: $ tar -cJf - directory | nc localhost 10000 Host B (1): $ ssh -N -R 10000:localhost:10000 user@A Host B (2): $ ssh -N -L 10000:localhost:10000 user@C Host C: $ nc -l 10000 | tar -vxJf - Using SCP If the relevant files are accessible from the user that is used to establish the SSH connection, you might also use scp with the -3 option. On host B: $ scp -r -3 user@A:/path/to/directory user@C:/path/to/destination.\nThere are more ways to transfer files using an intermediate host. Drop me a line if you know about a particularly neat one.\n","permalink":"https://nblock.org/2016/06/30/transfer-files-between-two-hosts-via-an-intermediate-host/","summary":"A few options to copy files between two hosts using an intermediate host","title":"Transfer files between two hosts via an intermediate host"},{"content":"A few days ago Nextcloud was officially announced and the Nextcloud team crafted a new release rather quickly.\nThe following steps were required to migrate from ownCloud 8.2.5 to Nextcloud 9.0.50:\nCreate a backup of the current ownCloud installation. Install Nextcloud by following this migration guide. Perform the actual upgrade for all apps: $ php occ upgrade --no-app-disable. For each third-party app: disable it in the WebUI, reload, enable it in the WebUI. Recover missing files: $ php occ files:scan --all. Check if the calendar/contacts migration worked as expected. Update the Nginx configuration. Remove the old ownCloud installation. For each mobile phone: replace the ownCloud app with the Nextcloud app. Have fun.\n","permalink":"https://nblock.org/2016/06/18/migrating-to-nextcloud-9/","summary":"How to migrate from ownCloud 8.2.5 to Nextcloud 9.0.50","title":"Migrating to Nextcloud 9.0"},{"content":"At work, we are currently migrating away from a proprietary ERP software package to Tryton. There are several reasons why we decided to switch and one of them is that Tryton provides an API and sane data access. Being a free software project, there are libraries and tools available to get access to the data. This is a good foundation to build your own tools on top.\nTryton provides a client library called Proteus for programmatic data access. The first tool that I built on top of Proteus is called pedantic bot. It nags about inconsistencies and errors in the database. For example, the pedantic bot checks for the following issues:\nLeading/trailing whitespace in fields Control characters in fields Inconsistent formatting of fields (phone numbers, …) Incomplete data records (e.g. contact information is missing) Missing descriptions … Tryton allows some of the fields to be translated where it makes sense to have them available in multiple languages (product name, a description, payment terms, …). The pedantic bot should check all the translations alike. 
It took me a while to figure out how to get access to a field in a particular translation, so here is a short demo on how to accomplish it in Tryton 3.8. It should work on other versions too:\n#!/usr/bin/python2 from proteus import config from proteus import Model def main(username, password, host, port, db): # Connect to Tryton. current_config = config.set_xmlrpc( \u0026#39;https://{}:{}@{}:{}/{}\u0026#39;.format(username, password, host, port, db)) # Get the product model. Product = Model.get(\u0026#39;product.product\u0026#39;) # Print the product information in de_DE. with current_config.set_context({\u0026#39;language\u0026#39;: \u0026#39;de_DE\u0026#39;}): for record in Product.find(): print(u\u0026#39;Name: {}\u0026#39;.format(record.rec_name)) print(u\u0026#39;Description: {}\u0026#39;.format(record.description)) # Print the product information in en_US. with current_config.set_context({\u0026#39;language\u0026#39;: \u0026#39;en_US\u0026#39;}): for record in Product.find(): print(u\u0026#39;Name: {}\u0026#39;.format(record.rec_name)) print(u\u0026#39;Description: {}\u0026#39;.format(record.description)) # Print the product information in the language that is configured for the # connected user. for record in Product.find(): print(u\u0026#39;Name: {}\u0026#39;.format(record.rec_name)) print(u\u0026#39;Description: {}\u0026#39;.format(record.description)) if __name__ == \u0026#39;__main__\u0026#39;: main(\u0026#39;username\u0026#39;, \u0026#39;password\u0026#39;, \u0026#39;hostname\u0026#39;, \u0026#39;9000\u0026#39;, \u0026#39;db\u0026#39;) Save this script as example.py, install Proteus and run:\n$ python example.py The solution is to set the desired language in the context: with current_config.set_context({'language': 'en_US'}). Have fun.\n","permalink":"https://nblock.org/2016/06/03/access-multiple-translations-in-tryton/","summary":"A short blog post on how to access multiple translations of a field in the Tryton ERP.","title":"Access multiple translations of a field in Tryton"},{"content":"One of our customers is currently using Perforce as their version control system. Some parts of their software are already in Git, but two huge repositories are still maintained in Perforce. Years ago, they decided that they wanted to move to Git and weed out Perforce altogether. Since then, their Perforce server has been basically unmaintained and the maintenance contract with Perforce expired years ago. It\u0026rsquo;s in very bad shape and the limitations of Perforce are daunting.\nNow, after several years of basically leaving everything as is, the customer decided to tackle this issue again and finally get rid of Perforce. A simplified version of the transition plan looks as follows:\nBring the internal git server (Gitlab) up-to-date. Create a one-way bridge between Perforce and Git to periodically sync new changes from Perforce to Git. Move the build system and infrastructure to Git. Educate developers to use Git (for those that don\u0026rsquo;t know it). Shut down the Perforce server permanently. This blog post is about the second step, the one-way bridge between Perforce and Git.\nRequirements Here is an informal list of requirements:\nKeep the entire history of interesting branches. These include: the current master branch, a few long-running development branches and many release branches. Preserve branch points, i.e. the point in time where two branches diverged. During the transition period, all new changes committed in Perforce should be periodically synced to Git (incremental updates). 
Make incremental updates fast. The one-way bridge Here is how one can migrate a repository from Perforce to Git with support for multiple branches, an unimpaired history and incremental updates.\nCreate the Git repository Create a new local Git repository and set up a Git remote. For the initial import, the remote is empty and no data can be fetched from it. On subsequent runs, most of the data is already in Git and fetching the data directly from Git is way faster than extracting the commits from Perforce.\nSync Perforce branches into Git Each Perforce branch of interest needs to be checked for updates. Git-p4 provides the sync subcommand which may be used for this purpose. If a copy of the Perforce branch is not yet in Git, import its entire history into a dedicated Git branch. If it is already in Git, just update the local Git branch and import all new changes since the last run. I used the prefix p4/ for Git branches that track a Perforce branch, e.g. p4/main tracks the Perforce main branch.\nFind all branches with updates Find all Git branches that were updated, since they need further processing. On the first run, all branches need to be updated. On all subsequent runs, the number of branches to update should be fairly small, if any.\nRewrite history to restore branch points After the import, the branches have no relationship with each other and the repository might look as follows:\n
            J---K---L---M  p4/dev

      G---H---I  p4/release

A---B---C---D---E---F  p4/main

In reality, the branches do relate to each other and the repository should look like this:\n
      G---H---I  p4/release
     /
A---B---C---D---E---F  p4/main
     \
      J---K---L---M  p4/dev

Unfortunately, the commit-parent relationship got lost during the import. One can use grafts to restore the relationship between commits:\nGraft points or grafts enable two otherwise different lines of development to be joined together. It works by letting users record fake ancestry information for commits. This way you can make git pretend the set of parents a commit has is different from what was recorded when the commit was created.\nSince the commit volume on the Perforce repository is relatively low, obtaining the graft points is straightforward in my case:\nExtract the current graft points from the Git repository. Find the SHA1 of the first commit on each branch and add the SHA1 of the commit that happened right before as a second parent. If no commit is available, do nothing. Store the modified grafts file under .git/info/grafts. Rewrite history using git filter-branch to permanently apply the grafts. Delete the grafts file.
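A minimal sketch of this graft-and-rewrite step: the branch names and the timestamp-based lookup of the branch point are assumptions, and the author's real implementation is the script linked below.

#!/bin/sh
# Sketch: graft each imported branch onto the commit on tmp/main that was
# created right before the branch's first commit, then bake the grafts in.
set -e
for branch in tmp/dev tmp/release; do
    first=$(git rev-list --max-parents=0 "$branch")    # first commit of the imported branch
    ts=$(git show -s --format=%cI "$first")            # its commit date (strict ISO 8601)
    point=$(git rev-list -1 --before="$ts" tmp/main)   # last commit on tmp/main before the split
    [ -n "$point" ] && echo "$first $point" >> .git/info/grafts
done
git filter-branch -- --branches='tmp/*'                # rewrite history, applying the grafts
rm -f .git/info/grafts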
From the Perforce branch to the final Git branch Since rewriting history permanently alters the repository, it is a very bad idea to do it on public branches. Furthermore, Git-p4 is not amused if one messes with the history of the p4/ branches. After some experiments, I decided on the following workflow to migrate a Perforce branch to a final Git branch:\n[Diagram: the Perforce main branch is cloned/synced into the local p4/main branch; its history is rewritten into the temporary tmp/main branch, which is merged into the final main branch; both p4/main and main are pushed to and pulled from the remote Git repository.]\nThe above diagram illustrates the workflow for a single branch, main:\nClone or sync the Perforce branch into the local Git branch. Create a temporary branch for each branch that got updated. Those branches have the prefix tmp/, e.g. tmp/main is the temporary branch for p4/main. Create the grafts file and rewrite history for all temporary branches. Create the final branch from the temporary branch, e.g. main is the final branch for the Perforce main branch and is branched from tmp/main. On the first run, the final branch does not exist, so simply create it from the temporary branch. If it does exist, perform a fast-forward-only merge to get new commits from the temporary branch into the final branch. Cleanup and publish Clean up the local Git repository and remove all temporary branches since they should not be pushed to the remote. After cleanup, publish all branches under p4/ and all final branches to the remote.\nImplementation I wrote a small one-way bridge in Python that implements the above steps. Unfortunately, it is written in Python 2 due to a fairly ancient server infrastructure where Python 3 is not available. You can download the script and a sample configuration from here.\nCredits Lukas for reviewing this blog post and his valuable input on this topic. Until next time.\n","permalink":"https://nblock.org/2015/12/12/migrate-from-perforce-to-git/","summary":"A short guide on how to migrate from Perforce to Git with support for incremental updates and history rewriting.","title":"Migrate from Perforce to Git"},{"content":"From time to time I receive e-mails that have a multipart/alternative MIME type and offer the e-mail body in a text/plain version as well as a text/html version. Unfortunately, the text/plain version is sometimes broken, containing a plain copy of the text/html body, having no content at all, or even providing useful hints such as:\nThis e-mail may only be displayed in HTML.\nAll of those sites happily ignore the meaning of the multipart/alternative MIME type as specified in RFC 2046:\n… In particular, each of the body parts is an \u0026ldquo;alternative\u0026rdquo; version of the same information.\nSystems should recognize that the content of the various parts are interchangeable. Systems should choose the \u0026ldquo;best\u0026rdquo; type based on the local environment and references, in some cases even through user interaction. …\nIn mutt, one can quickly display other MIME types by pressing the v key when viewing a message and selecting another MIME entry. Most of the time this is sufficient, but for certain sites where I know in advance that their text/plain version of the e-mail is broken, I want to automatically view the text/html version of the e-mail.\nOne way to achieve this is by using a message-hook together with the alternative_order muttrc setting. The following alternative_order prefers the text/plain version over the text/html version:\nalternative_order text/plain text/html To automatically select the text/html version for certain e-mails, I use the following message-hooks in my muttrc:\n# The default alternative_order: prefer text/plain over text/html. message-hook \u0026#39;.\u0026#39; \u0026#39;unalternative_order *; alternative_order text/plain text/html\u0026#39; # These sites send a multipart/alternative header while having a broken # text/plain MIME part. Select text/html as preferred MIME part. message-hook \u0026#39;~f agent@willhaben\\.at$\u0026#39; \u0026#39;unalternative_order *; alternative_order text/html\u0026#39; message-hook \u0026#39;~f post\\.at$\u0026#39; \u0026#39;unalternative_order *; alternative_order text/html\u0026#39;
The default message-hook (with the . in the pattern field) resets the alternative_order and sets it to prefer text/plain over text/html. Each of the following lines overrides the default alternative_order and prefers the text/html version for sites that send broken text/plain versions.\nHave fun.\n","permalink":"https://nblock.org/2015/12/05/mutt-automatically-display-an-e-mail-as-html/","summary":"In Mutt, show the HTML version of an e-mail for sites that send a broken text representation.","title":"Mutt: automatically display an e-mail as HTML"},{"content":"This is a short blog post on how to substitute text inside a visual selection with Vim. I encourage you to fire up a Vim instance and try it for yourself.\nSuppose you have the following block of text:\nx1 = do_something(\u0026#39;a\u0026#39;, 0x13, 0x02, 0x03, 0x02); x2 = do_something(\u0026#39;b\u0026#39;, 0xab, 0xcd, 0xef, 0x01); x3 = do_something(\u0026#39;c\u0026#39;, 0x15, 0x16, 0x17, 0x18); and you want to refactor it to the following block of text:\nx1 = do_something(\u0026#39;a\u0026#39;, 0x13020302); x2 = do_something(\u0026#39;b\u0026#39;, 0xabcdef01); x3 = do_something(\u0026#39;c\u0026#39;, 0x15161718); Vim allows you to substitute text using the :s command. The substitute command operates on a range of lines and replaces the search pattern with a string. Vim also offers multiple visual modes (characterwise, linewise and blockwise visual mode) to visually select text and operate on it.\nStarting with:\nx1 = do_something(\u0026#39;a\u0026#39;, 0x13, 0x02, 0x03, 0x02); x2 = do_something(\u0026#39;b\u0026#39;, 0xab, 0xcd, 0xef, 0x01); x3 = do_something(\u0026#39;c\u0026#39;, 0x15, 0x16, 0x17, 0x18); Place the cursor on the first line at the beginning of 0x13 and press CTRL-v to enable visual block mode and visually select all three lines up until the last parameter. All parameters starting with 0x should be selected.\nNow, enter the following substitute command:\n:\u0026#39;\u0026lt;,\u0026#39;\u0026gt;s/, 0x//g Hit \u0026lt;CR\u0026gt; and notice that Vim produces the following result:\nx1 = do_something(\u0026#39;a\u0026#39;13020302); x2 = do_something(\u0026#39;b\u0026#39;abcdef01); x3 = do_something(\u0026#39;c\u0026#39;15161718); This is not quite the result we expected. Vim\u0026rsquo;s substitute command operates on lines of text and it substituted each occurrence of the pattern on all selected lines. Undo the substitution (by pressing u), reselect the visual selection by pressing gv and use the pattern-atom %V inside the substitute command:\n:\u0026#39;\u0026lt;,\u0026#39;\u0026gt;s/\\%V, 0x//g Hit \u0026lt;CR\u0026gt; again and there it is:\nx1 = do_something(\u0026#39;a\u0026#39;, 0x13020302); x2 = do_something(\u0026#39;b\u0026#39;, 0xabcdef01); x3 = do_something(\u0026#39;c\u0026#39;, 0x15161718); The pattern-atom %V is used to match inside a visual selection. I encourage you to take a look at its documentation.\n:h %V :h pattern-atoms Obviously, this is just one possible way to solve this issue and many more exist. Until next time.\n","permalink":"https://nblock.org/2015/11/24/vim-substitute-inside-a-visual-selection/","summary":"How to substitute inside a visual selection with vim.","title":"Vim: substitute inside a visual selection"},{"content":"I use mutt as a mail client at home and at work and I\u0026rsquo;m quite happy with it. One thing that bugs me though is the built-in alias support. Its email address completion is rather limited. Luckily, mutt supports QueryCommand that can be used to connect an arbitrary data source for external email address completion. I use a tiny wrapper for grep to search multiple alias files at once:\n#!/bin/sh # Search all alias files and sort by name. echo \u0026#34;Search results for »$1«\u0026#34; grep --ignore-case --no-filename \u0026#34;$1\u0026#34; $HOME/.mutt/aliases/*.aliases | sort --key=2,2
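For illustration, a hypothetical invocation with made-up entries; the fields are tab-separated, in the query format described below:

$ ~/.mutt/scripts/alias-query doe
Search results for »doe«
jane.doe@example.com	Jane Doe	work
john.doe@example.com	John Doe	private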
The query_command option in mutt must be set accordingly to use the script:\nset query_command=\u0026#34;~/.mutt/scripts/alias-query \u0026#39;%s\u0026#39;\u0026#34; Mutt and the above script expect an alias file to be in the mutt query format. This format is rather simple and looks as follows (described here):\n\u0026lt;email address\u0026gt; \u0026lt;tab\u0026gt; \u0026lt;long name\u0026gt; \u0026lt;tab\u0026gt; \u0026lt;other info\u0026gt; \u0026lt;newline\u0026gt; Most of my alias files are autogenerated using various scripts. One of those scripts connects to an LDAP server, finds the name and the email address of all company employees and converts them to the mutt query format described above (tested on Exchange):\n#!/bin/sh # Collect data from an LDAP server and convert to mutt query format. # NOTE: This script interactively asks for the LDAP password. set -e set -u # Configuration LDAP_HOST=\u0026#34;192.168.1.1\u0026#34; LDAP_USER=\u0026#34;DOMAIN\\\\USERNAME\u0026#34; LDAP_BASE=\u0026#34;OU=SBSUsers,OU=Users,OU=MyBusiness,DC=DOMAIN,DC=TLD\u0026#34; MUTT_INFO=\u0026#34;TheOtherInfoField\u0026#34; ldapsearch -LLL -h \u0026#34;$LDAP_HOST\u0026#34; -D \u0026#34;$LDAP_USER\u0026#34; -W \\ -x -b \u0026#34;$LDAP_BASE\u0026#34; \u0026#34;(mail=*)\u0026#34; cn mail | \\ sed -n \u0026#34;/^cn:/ {N; s/^cn: \\(.*\\)\\nmail: \\(.*\\)$/\\2\\t\\1\\t$MUTT_INFO/p}\u0026#34; Have fun.\n","permalink":"https://nblock.org/2015/10/24/using-mutt-alias-files-for-email-address-completion/","summary":"Convert data from an LDAP server to Mutt query format and use it for email address completion.","title":"Using mutt alias files for email address completion"},{"content":"In this blog post, I\u0026rsquo;m going to describe the ideas and my solution for subscribing to public calendars and accessing them across all my devices (laptop, mobile phone, web UI). One might think that this is a no-brainer since there are tons of commercial providers like Google/Apple/… out there that can be used to tackle this issue. This is true, but I do not want to rely on commercial providers and their services to handle my private data.\nIdea One: Radicale and InfCloud After a quick internet search, Radicale and InfCloud seem like the ideal solution. Radicale is a rather small piece of Python software which is quite popular in the community. It provides fine-grained access control, can keep calendar entries in a Git repository and focuses on CalDAV and CardDAV. Since it has no web UI, InfCloud comes in handy. It is an open source CalDAV/CardDAV web client and the interface looks good enough for me.\nAll tests worked out well until I hit issue 249 during my tests with DAVdroid. As of now, there is no solution to this issue and not having the public calendars on my mobile phone is a deal breaker for me.\nIdea Two: Owncloud and CalendarPlus Since I already have an Owncloud instance up and running, I thought about re-using that. The standard OwnCloud calendar app (as of 8.1) does not support subscribing to public calendars and has a rather limited UI. There is an alternative available: CalendarPlus for Owncloud 8.1, but it did not work as expected during my tests. The subscription of public Google calendars did not work for me. 
It can\u0026rsquo;t re-use the data from the existing calendar app and, after all, it is a rather fresh piece of software. Maybe CalendarPlus will be an alternative down the road, but for now, I\u0026rsquo;m going to leave it aside.\nIdea Three: Owncloud and vdirsyncer Vdirsyncer may be used to synchronize calendars and address books between different storages. Typical storages are http, filesystem, CalDAV, CardDAV, …. It has proper documentation and is well-maintained. With vdirsyncer, the missing subscription feature of the OwnCloud calendar app is no longer an issue.\nDuring my tests, I hit two issues which were both fixed quickly by its author, Markus Unterwaditzer:\nOne of the calendars was using strange UIDs for events, causing vdirsyncer to crash. Every time one fetches a Google public calendar, the DTSTAMP field of each calendar entry is set to a current timestamp. This caused vdirsyncer to recognize all events as modified and trigger a synchronization of the entire calendar. With each invocation of vdirsyncer, all public Google calendars were re-synchronized. One remaining issue is that all synchronized calendars are writable for the OwnCloud user on all attached devices. But there is a workaround available: use a separate user that owns the calendars and share them as read-only with other users and groups.\nFinal solution This is a quick how-to of the solution I\u0026rsquo;m currently using:\nCreate a separate OwnCloud user that owns all public calendars. In this example: syncuser Log in as syncuser and create an OwnCloud calendar for each public calendar you want to synchronize. For example, I have a calendar called VALUG that maps to the public calendar on valug.at. Share the calendar as read-only for all required OwnCloud users or groups. Configure vdirsyncer in ~/.config/vdirsyncer/config Check if the synchronization works: vdirsyncer sync Create a cron job to run the synchronization periodically. Here is my (shortened) vdirsyncer configuration:\n[general] status_path = ~/.cache/vdirsyncer/status # Pairs [pair valug] a = valug_upstream b = valug_owncloud conflict_resolution = a wins # Storage entries [storage valug_upstream] type = http url = \u0026#34;http://valug.at/calendar/valug.ics\u0026#34; [storage valug_owncloud] type = \u0026#34;caldav\u0026#34; url = \u0026#34;https://owncloud.example.org/remote.php/caldav/calendars/syncuser/valug\u0026#34; username = \u0026#34;syncuser\u0026#34; verify_fingerprint = \u0026#34;AA:BB:CC:...\u0026#34; Until next time.\n","permalink":"https://nblock.org/2015/07/14/subscribe-to-public-calendars-with-owncloud/","summary":"How to subscribe to public calendars with OwnCloud and vdirsyncer.","title":"Subscribe to public calendars with OwnCloud"},{"content":"A while ago, I started to investigate JasperReports for two projects, a project at work and a private side project. The main reason why I chose JasperReports instead of any other reporting engine out there is JasperStarter, a project that drives JasperReports from the commandline.\nThis blogpost describes how to process JasperReports containing subreports with JasperStarter and an XML datasource. 
In JAS-84 the creator of JasperStarter kindly asked for a detailed write-up and an example.\nThese are the tools I used for this blogpost:\nJaspersoft Studio (version 6.0.4) JasperStarter (version 3.0.0) If you are an Arch Linux user, you can use my AUR package for JasperStarter.\nExpected result The screenshot illustrates the expected result:\nIt is not very sophisticated, but the header line and the contact details stem from two distinct subreports that are glued together in the main report.\nXML datasource The following XML file is used as XML datasource to fill the report:\n\u0026lt;contacts\u0026gt; \u0026lt;summary\u0026gt; \u0026lt;important\u0026gt;An important notice\u0026lt;/important\u0026gt; \u0026lt;/summary\u0026gt; \u0026lt;addressbook\u0026gt; \u0026lt;person\u0026gt; \u0026lt;name\u0026gt;ETHAN\u0026lt;/name\u0026gt; \u0026lt;phone\u0026gt;+1 (415) 111-1111\u0026lt;/phone\u0026gt; \u0026lt;/person\u0026gt; \u0026lt;person\u0026gt; \u0026lt;name\u0026gt;CALEB\u0026lt;/name\u0026gt; \u0026lt;phone\u0026gt;+1 (415) 222-2222\u0026lt;/phone\u0026gt; \u0026lt;/person\u0026gt; \u0026lt;person\u0026gt; \u0026lt;name\u0026gt;WILLIAM\u0026lt;/name\u0026gt; \u0026lt;phone\u0026gt;+1 (415) 333-3333\u0026lt;/phone\u0026gt; \u0026lt;/person\u0026gt; \u0026lt;/addressbook\u0026gt; \u0026lt;/contacts\u0026gt; Creating the report The report consists of three different jrxml files:\nThe main report (main.jrxml), which just includes the other two subreports. The subreport for the \u0026ldquo;Page Header\u0026rdquo; band (header.jrxml). It uses the XPath expression: /contacts/summary The subreport for the \u0026ldquo;Details\u0026rdquo; band: (details.jrxml). It uses the XPath expression: /contacts/addressbook/person I won\u0026rsquo;t go into details how to create the above reports with Jaspersoft Studio. There are a lot of tutorials out there, how to get started with JasperReports and Jaspersoft Studio. The important part for this blogpost is the subreport configuration in the main report.\nWhen you include the header subreport, use the following settings:\nExpression: header.jasper Data Source Expression: ((net.sf.jasperreports.engine.data.JRXmlDataSource)$P{REPORT_DATA_SOURCE}).dataSource(\u0026quot;/contacts/summary\u0026quot;) For the details subreport, use:\nExpression: details.jasper Data Source Expression: ((net.sf.jasperreports.engine.data.JRXmlDataSource)$P{REPORT_DATA_SOURCE}).dataSource(\u0026quot;/contacts/addressbook/person\u0026quot;) Jaspersoft Studio might complain about an invalid data source expression, ignore it. I was not able to fix this error. If you have a solution to this problem, feel free to drop me a line.\nCompiling the reports to jasper using JasperStarter All .jrxml files should be compiled to .jasper files before creating the desired report format:\njasperstarter compile header.jrxml jasperstarter compile details.jrxml jasperstarter compile main.jrxml Creating the final report The .jasper files can now be used to create a report in the desired output format:\njasperstarter process -f pdf -t xml --data-file contacts.xml --xml-xpath=\u0026#34;/\u0026#34; main The main report does not use any datasource, so any XPath expression may be used above.\nDownload the example The example from this blogpost is available as zip or tar.gz archive.\nCredits Volker Voßkämper for creating JasperStarter Wolfgang Silbermayr Update (2015-06-03) Use the compile command of JasperStarter. 
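To tie the steps together, a small shell wrapper along these lines should do (it simply repeats the compile and process commands from this post):

#!/bin/sh
# Compile all report definitions, then render the final PDF.
set -e
for report in header details main; do
    jasperstarter compile "$report.jrxml"
done
jasperstarter process -f pdf -t xml --data-file contacts.xml --xml-xpath="/" main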
","permalink":"https://nblock.org/2015/06/02/processing-jasper-subreports-with-jasperstarter/","summary":"Use JasperStarter to process JasperReports containing subreports with xml as datasource.","title":"Processing jasper subreports with JasperStarter"},{"content":"Yesterday, I got an e-mail from a colleague asking me to convert the content of a pdf file back to text. The pdf file had just one huge table with a few columns in it. There are several websites out there that offer this kind of conversion, but using these offers was no option due to confidential data in the pdf file. Here is a screenshot of the pdf file:\nConvert pdf to text pdftotext is quite handy for this task. Together with the option -layout, it tries to keep the visual appearance for the text file, as it was present in the pdf file:\npdftotext -layout input.pdf Cleaning up the text file A quick look at the text file revealed, that there were a lot of bogus empty lines and invalid first and last lines as well. Those issues can easily be fixed with sed:\nsed -i -e \u0026#39;/^$/d\u0026#39; -e \u0026#39;1d\u0026#39; -e \u0026#39;$d\u0026#39; input.txt Importing the text file into LibreOffice LibreOffice Calc may be used to import this text file as table. Select Fixed width as a separator and visually select the column borders.\nThe rows and columns should now match your expectations. One remaining issue is the whitespace in each and every cell. This can be easily fixed with the following search and replace pattern (select regular expressions in the options):\nSearch: [:space:]*(.+)[:space:]* Replace: $1 Now save the file and you\u0026rsquo;re done.\nFeedback? Contact me!\n","permalink":"https://nblock.org/2015/05/09/extracting-tabular-data-from-pdf-files/","summary":"Using pdftotext, sed and LibreOffice to extract tabular data from pdf files.","title":"Extracting tabular data from pdf files"},{"content":"I use flask quite a lot for several projects and over the last months my primary goal was to improve the code coverage with unit tests. There, I came across several occasions where the builtin open() is used in a flask view. A file is opened, its content gets read and the results are passed on to a template. The following minimal example shows a typical usage:\n#!/usr/bin/python3 from flask import Flask app = Flask(\u0026#34;MyFlaskApp\u0026#34;) @app.route(\u0026#34;/\u0026#34;) def index(): try: with open(\u0026#34;/no/such/file/i/guess\u0026#34;, \u0026#39;r\u0026#39;) as f: content = f.read() except FileNotFoundError: content = \u0026#34;No such thing\u0026#34; return content When it comes to unit testing, the mock library is quite useful. As a side note, since Python 3.3, mock is shipped as part of Python\u0026rsquo;s unittest library. It also provides a helper function, mock_open() to easily mock calls to open(). The usual code snippets out there mock calls to open() from (the global) builtins. This causes issues when open() should only be mocked for a particular module and stay untouched for all other modules. The following test piece may be used to mock out the call to open() in the context of the flask module to test. 
All other calls to open() stay untouched.\n#!/usr/bin/python3 from mymodule import app import unittest from unittest.mock import mock_open, patch class MyModule(unittest.TestCase): def setUp(self): self.app = app.test_client() def test_no_such_thing_on_file_not_found_error(self): m = mock_open() m.side_effect = FileNotFoundError() with patch(\u0026#39;mymodule.open\u0026#39;, m, create=True): rv = self.app.get(\u0026#39;/\u0026#39;) self.assertIn(b\u0026#34;No such thing\u0026#34;, rv.data) m.assert_called_once_with(\u0026#34;/no/such/file/i/guess\u0026#34;, \u0026#34;r\u0026#34;) The important piece of the above snippet is create=True, which is documented as follows:\nBy default patch() will fail to replace attributes that don’t exist. If you pass in create=True, and the attribute doesn’t exist, patch will create the attribute for you when the patched function is called, and delete it again afterwards. This is useful for writing tests against attributes that your production code creates at runtime. It is off by default because it can be dangerous. With it switched on you can write passing tests against APIs that don’t actually exist!\nFeel free to download the example code and run the tests with:\npython3 -m unittest discover Serve the application:\npython3 -c \u0026#39;from mymodule import app; app.run()\u0026#39; Feedback? Contact me!\n","permalink":"https://nblock.org/2015/04/19/mocking-open-in-python3-unit-tests/","summary":"How to mock calls to open() without interfering with the builtin open().","title":"Mocking open() in Python 3 unit tests"},{"content":"From time to time, I participate in PGP/GnuPG key signing parties to strengthen the web of trust. Before joining the last key signing party, the organizer asked to send in the public keys via e-mail. Shortly after sending my public keys, I got a reply from the organizer, stating that the sent public keys are not readable. This happened for the second time, so something is broken.\nDetails about the issue Here are the steps to reproduce the problem:\nExport a minimal version of a GnuPG public key:\n$ gpg --export-options export-minimal -a --export \u0026lt;keyid\u0026gt; \u0026gt;pubkey.asc Compose a new e-mail in Mutt.\nAttach the created file: pubkey.asc.\nSend the e-mail.\nThe recipient gets an e-mail with the following attachment:\nVersion: 1 As pointed out by S.N, this seems to be the first part of a PGP/MIME message. The second part (containing the content) is missing. See RFC3156 for details.\nWhen attaching the file pubkey.asc, Mutt detects its encoding as application/pgp-encrypted. Mutt even has some special treatment for files of this mime type. It simply replaces its content with the string: Version: 1.\nPossible solutions There are multiple possible solutions to this problem:\nManually change the mime type of the attachment to application/pgp-keys. This is error-prone and I, at least, will most likely forget it.\nRemove/comment the relevant line in /etc/mime.types.\n$ grep \u0026#39;application/pgp-encrypted\u0026#39; /etc/mime.types application/pgp-encrypted asc pgp Fix the special treatment for files with mime type application/pgp-encrypted in Mutt.\nAdditional reference and credits S.N for providing great hints, spending a lot of time on the issue and actually finding the solution. 
Arch Linux bug: https://bugs.archlinux.org/task/43319 Gentoo bug: https://bugs.gentoo.org/show_bug.cgi?id=534658 Mutt bug: http://dev.mutt.org/trac/ticket/3724 Update (2015-01-10) Mutt provides the function attach-key (mapped to \u0026lt;Esc\u0026gt;k by default) for sending a public key. This function sets the mime type properly. Unfortunately, attach-key is broken when the gpgme backend is used. See http://dev.mutt.org/trac/ticket/3488 for details.\n","permalink":"https://nblock.org/2015/01/04/on-sending-gnupg-publickeys-on-arch-linux-with-mutt/","summary":"What happens when you send public keys with the extension .asc using Mutt on Arch Linux.","title":"On sending GnuPG public keys with Mutt on Arch Linux"},{"content":"About two years ago, I started to work on gcimport, a script that may be used to convert various banking statements to clean and usable CSV files which may be imported into GnuCash. Last week, I discovered ofxstatement, which converts CSV files directly to OFX files. And the best thing about it? It supports plugins! So, goodbye gcimport, hello ofxstatement-austrian!\nInstallation ofxstatement-austrian requires Python 3.2 or later and ofxstatement 0.5.0 or later. You can install it via:\n$ pip install ofxstatement-austrian In case you are using Arch Linux, you might install it from the AUR package.\nUsage First of all, check if the installation worked by issuing ofxstatement list-plugins:\n$ ofxstatement list-plugins The following plugins are available: easybank Easybank (CSV) ing-diba ING-DiBa (CSV) livebank Livebank (CSV) raiffeisen Raiffeisenbank (CSV) If the above command worked, get an export of your banking statements as CSV and convert it to OFX:\n$ ofxstatement convert -t \u0026lt;plugin\u0026gt; statement.csv statement.ofx Finally, import the generated statement.ofx into GnuCash.\nSupported banks Currently, CSV statements from these banks are supported:\nEasybank (giro and credit card) ING-DiBa (money market) Livebank (money market) Raiffeisenbank (money market) ofxstatement-austrian is available on GitHub, PyPi and the AUR.\nFeedback? Contact me!\n","permalink":"https://nblock.org/2014/08/25/ofxstatement-austrian/","summary":"ofxstatement-austrian may be used to convert statements of Austrian banks to OFX.","title":"ofxstatement-austrian"},{"content":"From time to time it is necessary to write some glue code in order to connect software products with each other. At work, we use GitLab to manage our git repositories and Bugzilla as our bug tracker. One thing that bothered me was that the information buried in a git commit message does not necessarily make its way to the bug tracker. If one looks at the history of a bug, one might have no clue where the actual fix for a bug is. That was the main reason to create Snolla, a very minimalist project to connect GitLab with Bugzilla. It is written in Python 3 using Flask and python-bugzilla. Since Snolla worked for several months without any problems, I decided to release it as free software (AGPLv3). You can find it on github.\nHow does it work? Snolla extracts some of the information found in GitLab webhooks in order to execute certain tasks on a Bugzilla instance. Currently, it can create comments on a referenced bug. 
In the default configuration, it searches for one of the following keywords followed by a »#« and a numeric bug id:\ncomment comments mention mentions see seealso For example, the commit message »Fix off-by-one error in foo (see #42).« in the master branch of the project foobar creates the following comment in bug 42 within Bugzilla:\nauthor: John Doe \u0026lt;johndoe@example.com\u0026gt; url: \u0026lt;The url of the commit in gitlab\u0026gt; branch: master message: Fix off-by-one error in foo (see #42). Features Restrict to certain branches Snolla can be configured to limit the keyword search to specific branches or groups of branches (e.g. »master«, »bugfix/«, …). Easily extendable Snolla has been designed to be easily extendable. One may add support for new frontends (GitHub, BitBucket, …), new backends (Redmine, Jira, …) or new tasks (close a bug, …). Highly customizable Snolla ships with a fairly well-documented example configuration. Just copy the configuration to /etc/snolla.conf and adjust it to suit your needs. Free software Snolla is licensed under the AGPLv3 license. Requirements In order to use Snolla, you need to:\nAdd a webhook to the project configuration in your GitLab instance pointing to Snolla. Create a user in Bugzilla that is allowed to comment on bugs. Set up a WSGI server to run Snolla on. In the project\u0026rsquo;s README file, a sample setup using nginx and uwsgi is described. Copy the file snolla.conf.example to /etc/snolla.conf and change it to suit your needs. For more information on how to set up Snolla, refer to the README at its github page.\nFeedback? Contact me!\n","permalink":"https://nblock.org/2014/08/19/snolla-a-gitlab-to-bugzilla-bridge/","summary":"Snolla is a simple project that aims to connect GitLab with Bugzilla.","title":"Snolla"},{"content":"At work we use a print solution that consists of two parts:\nA Konica Minolta bizhub C284e printer An EFI Fiery E100 print server Each user has his own account on the printer and these credentials are needed for printing. EFI provides drivers for Windows and Mac OS X. As a Linux user, you can grab a Mac OS X PPD file from the vendor and hope that CUPS can deal with it. As long as you do not use any sort of authentication on the print server, the vendor\u0026rsquo;s PPD file works quite nicely.\nThe Problem The Fiery driver (on Mac OS X) ships some sort of proprietary CUPS filter that mixes account credentials into the actual document sent to the printer. Without this proprietary filter, one can\u0026rsquo;t print on such a device. This problem may be solved by sniffing the communication between an authenticated user and the print server and by writing a custom CUPS filter. 
Download the files described below.\nGet hold of an authenticated PostScript document Either use tcpdump/wireshark or netcat to get hold of an authenticated PostScript document.\nnc -l -p 9100 \u0026gt; sniffed.file Change the IP address for a configured printer and print any document you wish.\nExtract required data You need some data from the following sections of the sniffed file.\n[snipped] %%EFIUATag: some_base64_encoded_blob\u0026#34; [snipped] %%BeginSetup %%BeginFeature: *EFUserAuthName username /XJXsetUserName where { pop \u0026lt;username_hash\u0026gt; XJXsetUserName} if %%EndFeature %%BeginFeature: *EFUserAuthPwd password /XJXsetAccessCode where { pop \u0026lt;password_hash\u0026gt; XJXsetAccessCode} if %%EndFeature\u0026#34; [snipped] Use the following script to extract the required data and generate a config file for the CUPS filter described in the next section. Run it on the sniffed file and store the results in /etc/cups/ppd/fieryauth.conf.\n#!/bin/sh # extract fiery account credentials and print them to stdout sed -n -r -e \u0026#39;s/%%EFIUATag: (.*)/TAG=\u0026#34;\\1\u0026#34;/p\u0026#39; \u0026#34;$1\u0026#34; sed -n -r -e \u0026#39;s/.*EFUserAuthName (.*)/USERNAME=\u0026#34;\\1\u0026#34;/p\u0026#39; \u0026#34;$1\u0026#34; sed -n -r -e \u0026#39;s/.*XJXsetUserName.*\u0026lt;(.*)\u0026gt;.*/USERNAME_HASH=\u0026#34;\\1\u0026#34;/p\u0026#39; \u0026#34;$1\u0026#34; sed -n -r -e \u0026#39;s/.*EFUserAuthPwd (.*)/PASSWORD=\u0026#34;\\1\u0026#34;/p\u0026#39; \u0026#34;$1\u0026#34; sed -n -r -e \u0026#39;s/.*XJXsetAccessCode.*\u0026lt;(.*)\u0026gt;.*/PASSWORD_HASH=\u0026#34;\\1\u0026#34;/p\u0026#39; \u0026#34;$1\u0026#34; $ ./extract.sh sniffed.file TAG=\u0026#34;some_base64_encoded_blob\u0026#34; USERNAME=\u0026#34;username\u0026#34; USERNAME_HASH=\u0026#34;username_hash\u0026#34; PASSWORD=\u0026#34;password\u0026#34; PASSWORD_HASH=\u0026#34;password_hash\u0026#34; A custom CUPS filter The purpose of the custom CUPS filter is to mix the authentication data to each PostScript document, that you send to the print server. Copy the following to /usr/lib/cups/filter/fieryauth and make the file executable.\n#!/bin/sh # user configuration . /etc/cups/ppd/fieryauth.conf # insert EFIUATag afer line 10 and insert credentials after \u0026#39;BeginSetup\u0026#39; sed -e \u0026#34;10a %%EFIUATag: ${TAG}\u0026#34; -e \u0026#34;/BeginSetup/a %%BeginFeature: *EFUserAuthName ${USERNAME}\\n /XJXsetUserName where { pop \u0026lt;${USERNAME_HASH}\u0026gt; XJXsetUserName} if\\n %%EndFeature\\n %%BeginFeature: *EFUserAuthPwd ${PASSWORD}\\n /XJXsetAccessCode where { pop \u0026lt;${PASSWORD_HASH}\u0026gt; XJXsetAccessCode} if\\n %%EndFeature\u0026#34; Modify the PPD from the vendor One step remains, you need to modify the PPD from the vendor in order to get CUPS to use your custom filter. Get it from EFI\u0026rsquo;s website and append the following lines at the end of the PPD file:\n*cupsFilter: \u0026#34;application/vnd.cups-raw 0 fieryauth\u0026#34; *cupsFilter: \u0026#34;application/vnd.cups-command 0 commandtops\u0026#34; *cupsFilter: \u0026#34;application/vnd.cups-postscript 0 fieryauth\u0026#34; The rest is easy, set up your printer with CUPS as usual, select the modified PPD and you\u0026rsquo;re done.\nA last note Currently, we do not know how the proprietary Fiery CUPS filter calculates the values from above (EFIUATag, USERNAME_HASH, PASSWORD, PASSWORD_HASH). 
If you know it, please get in contact with me!\nAdditional reference and credits Wolfgang Silbermayr http://casa.apertus.es/blog/2011/06/howto-account-tracking-konica-minolta-c220-under-linux/ ","permalink":"https://nblock.org/2013/11/15/linux-and-a-fiery-print-server/","summary":"How to print on a Fiery/Konica Minolta print setup as an authenticated Linux user.","title":"Linux and a Fiery/Konica Minolta print setup"},{"content":"I use Thunderbird at home and in the office and one thing which annoys me, is that, by default Thunderbird just checks the INBOX of an IMAP account for new mail. To check each folder of an IMAP account for new mail, simply change\nmail.check_all_imap_folders_for_new from false to true in the config editor.\n","permalink":"https://nblock.org/2013/06/20/check-all-folders-in-thunderbird/","summary":"How to check all IMAP folders for new messages in Thunderbird","title":"Check all IMAP folders for new messages in Thunderbird"},{"content":"I am transitioning GPG keys from an old 1024-bit DSA key to a new 4096-bit RSA key. The old key will continue to be valid for some time, but I prefer all new correspondance to be encrypted in the new key, and will be making all signatures going forward with the new key.\nHere is my transition statement:\n-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 I am transitioning GPG keys from an old 1024-bit DSA key to a new 4096-bit RSA key. The old key will continue to be valid for some time, but I prefer all new correspondance to be encrypted in the new key, and will be making all signatures going forward with the new key. This transition document is signed with both keys to validate the transition. If you have signed my old key, I would appreciate signatures on my new key as well, provided that your signing policy permits that without reauthenticating me. The old key, which I am transitional away from, is: pub 1024D/71AE2C33 2008-05-13 [expires: 2013-06-04] Key fingerprint = 2C77 1EA9 A279 2B2B CF2A E246 C710 233D 71AE 2C33 The new key, to which I am transitioning, is: pub 4096R/27415CF9 2013-02-23 [expires: 2018-02-22] Key fingerprint = 89C9 5CF0 871D 6EC1 0A3F ECD9 741E 93C2 2741 5CF9 To fetch the full new key from a public key server using GnuPG, run: gpg --recv-key 741E93C227415CF9 If you have already validated my old key, you can then validate that the new key is signed by my old key: gpg --check-sigs 741E93C227415CF9 If you then want to sign my new key, a simple and safe way to do that is by using caff (shipped in Debian as part of the \u0026#34;signing-party\u0026#34; package) as follows: caff 741E93C227415CF9 Find contact details at http://nblock.org/about if you have any questions about this document or this transition. 
Florian Preinstorfer 23-02-2013 -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (GNU/Linux) iEYEAREKAAYFAlEo2PIACgkQxxAjPXGuLDMhAQCeKkIJQxe6UgogzpDpSQswe/Ms 1woAnAuweKXiy4NSmCxXVYr1hEmbSxKOiQIcBAEBCgAGBQJRKNjyAAoJECHsEPEc hdl8WoEP+gOW9jDKR2thjzyaIHpFp4NwMl6nRmJf+y2KaoiYMqdKa7VsytRRCC88 JQR/ZWKZoD6qSBwMMvQdicUkpmyo7nmW/HPbyQvtLyx95CkinumnEV/plEcZf5vx JW1TvWrmKispLmRP+0UUc3sQu/7VCEnJ/6n6f2NKdjKDEQwvPkmfMVNG0AhT+TAb EhMc8TPdmqYFWbro8daFeaJtSh6+MfA8kKFFfipL93SpGRiU+xYbxvfhEA+rmvQO 1/56y9/OUet6FC9m3rkZTKa3kr9WaTojgxqYX+2caWtV8X1IwHoj7MaLQv26DxSx FoOdInWwMqZv53KkOLH/ZAA+ULfqpiJ4c8rcAIgg7gMl+Ltpl3CUsWRfiIVpo57v ibadggcaqemYhtJrAtd8N0Jjg8GssWqPJXV5TQgyJgRqu18xHqjYCFzwzYUkkefD 8JrvGiqAJUN1mxXMB7SXDUb6I7F8IIkz6zBlEUHdGr8WOee0KaUktpCtTAybnL8T Arot2UgDA4IRSmtS82I4kBhNeo47paoZ0G+cC/rJc260vvZoWlPJSl9pv32p9UHs 5soMy3w+6w3XB8JFshmnSaR97bG6uCs68V3WtyC04d0h9KFEWemh5BRZo2yM6Bas X+qIeh2f2oyu/iAoBXgaRQftKkQCiX5rdaGLNpO09PeJ0oNoYDoP =5a9e -----END PGP SIGNATURE----- You can also download the above statement from here. Use the following commands to verify the integrity of the transition statement:\n$ gpg --recv-key 741E93C227415CF9 $ curl https://nblock.org/2013/02/23/new-gnupg-key/gpg-transition-statement-741E93C227415CF9.txt.asc | gpg --verify ","permalink":"https://nblock.org/2013/02/23/new-gnupg-key/","summary":"GnuPG key transition statement","title":"GnuPG key transition statement"},{"content":"At work, we use Mediawiki to document stuff. It is nice to have up to date wiki articles, but the process of writing them can be painful sometimes. When it comes to editing text, I want to have a nice editing environment and not the crappy online text editors provided by all kinds of CMS and Wikis. Simply put, I want to have the same editing environment as for every other document I touch. For me, this editing environment happens to be Vim with a few useful plugins. So, this short blog post shows some of the things I use for editing Mediawiki documents with Vim. Take a look at my vimrc if you are interested in my other Vim settings.\nSyntax highlighting One of the most important things when editing files is to have some visual support by using syntax highlighting. For Mediawiki, there are a few syntax highlighting plugins available. I use mediwiki.vim.\nQuick navigation in Mediawiki documents with Tagbar A nice way for navigating inside a file is provided by Tagbar. From the webpage:\nVim plugin that displays tags in a window, ordered by class etc.\nSince there is no ctags support for Mediawiki available in Tagbar, you have to make it yourself. Here is how I did it.\nAdd this to your .ctags file:\n--langdef=mediawiki --langmap=mediawiki:.wiki --regex-mediawiki=/^=[[:space:]]?([^=]+)[[:space:]]?=$/\\1/h,header/ --regex-mediawiki=/^==[[:space:]]?([^=]+)[[:space:]]?==$/. \\1/h,header/ --regex-mediawiki=/^===[[:space:]]?([^=]+)[[:space:]]?===$/. \\1/h,header/ --regex-mediawiki=/^====[[:space:]]?([^=]+)[[:space:]]?====$/. \\1/h,header/ --regex-mediawiki=/^=====[[:space:]]?([^=]+)[[:space:]]?=====$/. 
\\1/h,header/ Next up, add this to your .vimrc file:\n\u0026#34; tagbar mediawiki support autocmd FileType mediawiki :!ctags % let g:tagbar_type_mediawiki = { \\ \u0026#39;ctagstype\u0026#39; : \u0026#39;mediawiki\u0026#39;, \\ \u0026#39;kinds\u0026#39; : [ \\ \u0026#39;h:header\u0026#39;, \\ ], \\ \u0026#39;sort\u0026#39; : 0 \\ } When you edit a wiki page with Vim and it recognises the file as a Mediawiki document, it will automatically run ctags on the file and the results will be ready to use with the Tagbar plugin.\nSnippet support for UltiSnips Want to have some snippet support for Mediawiki in UltiSnips? Take a look at my snippets for Mediawiki.\n","permalink":"https://nblock.org/2012/12/06/editing-mediawiki-documents-with-vim/","summary":"Setting up Vim to work with Mediawiki documents.","title":"Editing Mediawiki documents with Vim"},{"content":"Around May 2012 I started to use GnuCash for private accounting. GnuCash is a very feature-rich and reliable free software application and it allows me to keep track of my personal finances. I really like it.\nBut doing private accounting means that I need to keep track of all my expenses and my income, which can be hard to sustain. Adding financial transactions to GnuCash on a day-to-day basis seems like overkill to me; I would not keep it up for long. But what I can do pretty quickly is to note my transactions with a note-taking application on my mobile phone. Once a week, I get back to my notes and import them into GnuCash.\nFurthermore, all the banks I use offer a way to export transactions in some format. Most of them are proprietary and none of them are compatible with GnuCash. The result is that importing these transactions is an error-prone and tedious task.\nSo, to ease import from the mentioned sources, I started to write a little script that can convert various input formats such as csv and txt files and export them in a GnuCash-friendly csv format. I am aware that there are other formats that should be used instead of csv, but the csv importer of GnuCash is the fastest possible way for me to import transactions.\nCurrently, the following input formats are supported:\nEasybank AG Raiffeisenbank (ELBA) NoteMe (Android notetaking application) Paypal The script is available on github.\nFeedback? Contact me!\n","permalink":"https://nblock.org/2012/11/29/gcimport-a-simple-converter-for-gnucash/","summary":"gcimport is a simple script that can be used to convert various input files for GnuCash.","title":"gcimport"},{"content":"Status This page is heavily outdated. For more or less useful documentation, refer to the Owncloud administrators manual for other webservers.\nIntroduction Over the last few months I tested Owncloud so that I could move my calendars away from Google. I also want to have my contacts in sync across my devices (phone, notebook). So, this blog post is about syncing calendars (with CalDAV) and contacts (with CardDAV) between Thunderbird and my Android phone using Owncloud on Nginx.\nNginx Just install the standard Arch Linux package from [community]. There is no need to patch Nginx with additional WebDAV modules or anything like that. Just take a look at the configuration below.\nOwncloud requires a few PHP extensions; make sure you have them installed and configured as well. Take a look at the Arch Wiki for more information on setting up Nginx and PHP.\nOwncloud Get the latest tarball from Owncloud and extract it into your webroot. 
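For example — the exact download URL and version number are placeholders; the webroot matches the Nginx configuration below:

$ cd /srv/http
$ curl -LO https://download.owncloud.org/community/owncloud-x.y.z.tar.bz2
$ tar -xjf owncloud-x.y.z.tar.bz2          # unpacks into /srv/http/owncloud
$ sudo chown -R http:http owncloud        # user running PHP-FPM on Arch Linux (assumption)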
Here is the Nginx configuration I use:\n# owncloud server { listen 80; server_name owncloud.example.org; rewrite ^ https://$server_name$request_uri? permanent; # enforce https } # owncloud (ssl/tls) server { listen 443 ssl; server_name owncloud.example.org; ssl_certificate /etc/nginx/certs/server.crt; ssl_certificate_key /etc/nginx/certs/server.key; root /srv/http/owncloud; index index.php; client_max_body_size 20M; # set maximum upload size # deny direct access location ~ ^/(data|config|\\.ht|db_structure\\.xml|README) { deny all; } # default try order location / { try_files $uri $uri/ @webdav; } # owncloud WebDAV location @webdav { fastcgi_split_path_info ^(.+\\.php)(/.*)$; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS on; include fastcgi_params; } # enable php location ~ \\.php$ { fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS on; include fastcgi_params; } } Finish setup by creating your Owncloud admin account.\nThunderbird To get CalDAV/CardDAV support for Thunderbird and Lightning respectively, use the SOGo Connector for Thunderbird.\nHere is the CardDAV URL for your address book:\nhttps://owncloud.example.org/apps/contacts/carddav.php/addressbooks/USERNAME/ADDRESSBOOKNAME/ Here is the CalDAV URL for your calendar (one URL for each calendar):\nhttps://owncloud.example.org/apps/calendar/caldav.php/calendars/USERNAME/CALENDARNAME/ Android Marten Gajda has written sync clients for CalDAV (CalDAV-Sync) and CardDAV (CardDAV-Sync). Either use the free version from Google Play (formerly Android Market) or buy the paid version containing additional features. You can find the documentation for Owncloud in the dmfswiki. You could also use aCal but I haven\u0026rsquo;t tried it.\nIn short, use this URL for your address book (without https://):\nowncloud.example.org/apps/contacts/carddav.php/addressbooks/ Below is the URL for your calendars. If everything works as expected, you should be prompted with a list of available calendars:\nowncloud.example.org/owncloud/apps/calendar/caldav.php/calendars/ Feedback? Contact me!\nAdditional reference and credits Daniel Hofer http://owncloud.org/support/setup-and-installation/linux-server ","permalink":"https://nblock.org/2012/03/12/nginx-and-owncloud/","summary":"A blog post about setting up Nginx and Owncloud to keep contacts and calendars in sync.","title":"Setting up Nginx and Owncloud"},{"content":"I\u0026rsquo;ve been using the Python Debugger a lot lately and decided to write a cheatsheet that covers most of the basic stuff. The cheatsheet is suitable for pdb and ipdb (just replace pdb with ipdb).\nThe cheatsheet is available as PNG and PDF files on the GitHub release page.
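For quick reference, the debugger is typically entered by running a script under it (myscript.py is a placeholder):

$ python -m pdb myscript.py

Alternatively, placing import pdb; pdb.set_trace() at the spot of interest in the code drops you into the debugger right there.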
VIM provides syntax files for more than 130 languages; even more can be found on vim-scripts. There is even syntax highlighting for DokuWiki available, but I was not very happy with it.\nSo, here is my implementation. You can find it on my github page or on its vim-script page.\nScreenshots The colorschemes in use are neon and slate.\nInstallation Just grab a copy of dokuwiki.vim and copy the file into your ~/.vim/syntax/ directory.\nA better alternative is to use a VIM package manager such as Vundle.\nSince DokuWiki uses no particular file extension, you need to set the filetype manually with the command :set ft=dokuwiki. You can use the DokuWiki comment plugin to automate the task. Simply append the line below to your wiki page:\n/* vim: set ft=dokuwiki */ This line will only be visible while editing the wiki page.\nDevelopment Some important parts of the DokuWiki syntax are still missing. Just fork the project and start hacking.\n","permalink":"https://nblock.org/2011/10/04/vim-syntax-highlighting-for-dokuwiki/","summary":"Enable syntax highlighting when editing DokuWiki pages with VIM.","title":"VIM syntax highlighting for DokuWiki"},{"content":"Sometimes it is necessary to update the certificate store on a rooted Android device. Here are just a few reasons for doing so:\nJust another CA got compromised. You want to add a CA that is not included in the official certificate store (e.g. CAcert). You operate your own CA and want your device to trust it (companies come to mind). This blog post focuses on a rooted Samsung Galaxy S GT-I9000, running a recent version of Android (Version: 2.3.4, DarkyROM). Some paths and the file system type may differ on other devices.\nRequirements The following is required to update the Android certificate store:\nA rooted Android device. Without being root on your phone, you are doomed to wait for updates provided by either Google or the phone manufacturer. keytool, which comes with recent versions of the JRE. The Bouncy Castle Crypto API. Either adb from the Android SDK or a terminal emulator on the phone. I used the free Android Terminal Emulator from Android Market. Obtaining the certificate store from the device Android stores its certificates in /system/etc/security/cacerts.bks. When you mount the SD card, /system will not show up. Thus, copy cacerts.bks to /sdcard/ before mounting it.\nandroid~$ cp /system/etc/security/cacerts.bks /sdcard Then, mount your SD card and copy the file to your box.\nbox~$ pmount /dev/sdb box~$ cp /media/sdb/cacerts.bks ~ Removing certificates from the store First, find the certificate of a CA you want to remove. 
Remember the alias of the certificate (in this example 95).\nbox~$ keytool -keystore cacerts.bks -storetype BKS\\ -provider org.bouncycastle.jce.provider.BouncyCastleProvider\\ -storepass changeit -v -list | grep -A 4 -B 8 diginotar Alias name: 95 Creation date: 03.03.2011 Entry type: trustedCertEntry Owner: C=NL,O=DigiNotar,CN=DigiNotar Root CA,E=info@diginotar.nl Issuer: C=NL,O=DigiNotar,CN=DigiNotar Root CA,E=info@diginotar.nl Serial number: c76da9c910c4e2c9efe15d058933c4c Valid from: Wed May 16 19:19:36 CEST 2007 until: Mon Mar 31 20:19:21 CEST 2025 Certificate fingerprints: MD5: 7A:79:54:4D:07:92:3B:5B:FF:41:F0:0E:C7:39:A2:98 SHA1: C0:60:ED:44:CB:D8:81:BD:0E:F8:6C:0B:A2:87:DD:CF:81:67:47:8C SHA256: 0D:13:6E:43:9F:0A:B6:E9:7F:3A:02:A5:40:DA:9F:06:41:AA:55:4E:1D:66:EA:51:AE:29:20:D5:1B:2F:72:17 Signature algorithm name: SHA1WithRSAEncryption Owner: C=NL,O=DigiNotar,CN=DigiNotar Root CA,E=info@diginotar.nl Issuer: C=NL,O=DigiNotar,CN=DigiNotar Root CA,E=info@diginotar.nl Remove it:\nbox~$ keytool -keystore cacerts.bks -storetype BKS\\ -provider org.bouncycastle.jce.provider.BouncyCastleProvider\\ -storepass changeit -delete -alias 95 Now, if you list the certificates inside the store again, you should no longer see this particular certificate.\nAdding certificates to the store This is a common task, especially if you are a CAcert user. Just obtain the root certificate and put it in your $HOME.\nbox~$ #assume you want to add root.crt to the keystore box~$ keytool -keystore cacerts.bks -storetype BKS\\ -provider org.bouncycastle.jce.provider.BouncyCastleProvider\\ -storepass changeit -importcert -trustcacerts -alias myalias -file root.crt Be sure to check the fingerprint of the certificate and use a meaningful alias when importing it.\nPushing the certificate store back onto the device Simply mount your SD card and copy the modified cacerts.bks back onto the device.\nbox~$ cp ~/cacerts.bks /media/sdb/ box~$ pumount /media/sdb Copy cacerts.bks back to /system/etc/security/. To accomplish this step, you need to remount /system as read/write:\nandroid~$ su #required to remount /system android~# mount -o rw,remount /system android~# cp /sdcard/cacerts.bks /system/etc/security/cacerts.bks android~# mount -o ro,remount /system Finally, reboot the device and be happy.\nReferences http://blog.mylookout.com/2011/08/for-rooted-android-device-users-open-heart-surgery-on-your-android-ca-store/ http://wiki.cacert.org/FAQ/ImportRootCert#Android_Phones http://silkemeyer.net/root-zertifikate-von-cacert-in-android-importieren Update You can use CACertMan, a free app that allows you to browse, search, backup, restore and delete SSL Root Authority certificates from the Android certificate store directly on a rooted phone. I wrote a simple script that automates adding CAcert certificates to the Android certificate store. You can find it here. ","permalink":"https://nblock.org/2011/09/03/how-to-update-the-android-certificate-store/","summary":"How to update the Android certificate store on a rooted device (Samsung Galaxy S GT-I9000).","title":"How to update the Android certificate store"},{"content":"Hi, and welcome to my site. First off, I want to tell you something about the static blogging engine I use. It\u0026rsquo;s called rstblog and it has been written by Armin Ronacher. 
I discovered this fine piece of software when I stumbled over Armin\u0026rsquo;s blog entry Python and the Principle of Least Astonishment.\nThese are my key points for choosing rstblog:\nFree software (BSD) Simple Produces static output Written in Python Write blog entries with vim Manage blog entries with git Does the trick for me Installation You can either check out and build rstblog from source or use my PKGBUILD file in case you are an Arch Linux user.\nGetting started rstblog requires a certain directory structure to produce your blog. Here is the layout of example.org, a simple blog:\nexample.org ├── 2011 │ ├── 08 │ │ └── 31 │ │ └── 1st-blogpost.rst │ └── 09 │ └── 15 │ ├── nice.rst │ └── story.rst ├── about.rst ├── config.yml └── _templates └── layout.html To get started, just download the example above.\nFirst off, adjust the config file (config.yml) to suit your needs.\n--- active_modules: [pygments, tags, blog, latex] author: your name canonical_url: http://example.org modules: pygments: style: friendly The special folder _templates contains the Jinja2 template(s) that rstblog uses to create your blog. Just create the file layout.html, which is the most important building block for producing your blog.\n\u0026lt;!doctype html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=utf-8\u0026gt; \u0026lt;title\u0026gt;{% block title %}{% endblock %}\u0026lt;/title\u0026gt; \u0026lt;link href=\u0026#34;/feed.atom\u0026#34; rel=\u0026#34;alternate\u0026#34; title=\u0026#34;Feed\u0026#34; type=\u0026#34;application/atom+xml\u0026#34;\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;ul\u0026gt; \u0026lt;li\u0026gt;\u0026lt;a href=\u0026#34;/\u0026#34;\u0026gt;home\u0026lt;/a\u0026gt;\u0026lt;/li\u0026gt; \u0026lt;li\u0026gt;\u0026lt;a href=\u0026#34;/archive/\u0026#34;\u0026gt;archive\u0026lt;/a\u0026gt;\u0026lt;/li\u0026gt; \u0026lt;li\u0026gt;\u0026lt;a href=\u0026#34;/tags/\u0026#34;\u0026gt;tags\u0026lt;/a\u0026gt;\u0026lt;/li\u0026gt; \u0026lt;/ul\u0026gt; {% block body %}{% endblock %} \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Blog entries rstblog distinguishes (sort of) between two different kinds of blog entries:\nRegular blog entries Static blog entries Regular blog entries Regular blog entries should be put in a directory structure that looks like this: year/month/day. Create such a structure and put your first blog entry (my-first-blog-entry.rst) in there.\npublic: yes tags: [rstblog, firsttry] summary: This is my first blog entry My first blog entry =================== Hello World! Static blog entries rstblog also allows you to create static blog entries such as an about page. Simply put an rst-formatted file in the root directory of your blog.\npublic: yes About me ======== This is me. Building the blog Finally, build and view the result:\n$ run-rstblog build $ run-rstblog serve Serving on http://127.0.0.1:5000/ Now point your browser to http://127.0.0.1:5000 to view the results of your work. The about entry should work as well: http://127.0.0.1:5000/about.\nAdditional stuff Static content Static content (css files, js files, …) should be placed in the directory static. During the build process, the content of this folder will be copied to the directory _build/static. You can easily link to them from your blog entries using `target \u0026lt;/static/target\u0026gt;`_.\nDesign Your layout.html file contains Jinja2 templating code. What\u0026rsquo;s missing is a css file that nicely formats your content. 
Just create one and put it into the directory static. Don\u0026rsquo;t forget to link to it in your layout.html!\nTagging Tagging is a nice feature that helps classify blog entries. The first blog entry (Regular blog entries) already uses two tags, rstblog and firsttry. These tags will be used to create a tag overview page, viewable at http://127.0.0.1:5000/tags. Use as many tags as you like to classify your content.\nCustomize blog generation In the default setting, rstblog will build archive and tag pages for you. This is of course also template-based. If you wish to modify some of the templates, copy the affected files from rstblog/rstblog/templates to your _templates folder and adjust them accordingly.\nPublish your blog This is really simple. Just copy your _build folder to your public html folder and you are done.\nFurther information This cheat sheet might be interesting if you need some hints for writing content in reStructuredText.\nThis might also help you to set up your own rstblog site.\n","permalink":"https://nblock.org/2011/08/31/1st-blogpost/","summary":"A short blog post on how to use rstblog.","title":"About using the static blogging engine rstblog"},{"content":"Hi and welcome to my site. My name is Florian Preinstorfer and I\u0026rsquo;m a software engineer with a focus on free software, security and infrastructure operations.\nContact You can reach me via:\nphone: +43-677-62572799 e-mail: blog (at) nblock (dot) org [GPG available] github gitlab GPG/PGP Key id: 0xEB446E361015043D Key fingerprint: 4DAC C118 E918 A609 9AFE C0F3 EB44 6E36 1015 043D You can get my public key from here.\nImpressum (site notice) Florian Preinstorfer 4951 Polling im Innkreis Austria ","permalink":"https://nblock.org/about/","summary":"\u003cp\u003eHi and welcome to my site. My name is Florian Preinstorfer and I\u0026rsquo;m a software engineer with a focus on free software,\nsecurity and infrastructure operations.\u003c/p\u003e\n\u003ch2 id=\"contact\"\u003eContact\u003c/h2\u003e\n\u003cp\u003eYou can reach me via:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003ephone: +43-677-62572799\u003c/li\u003e\n\u003cli\u003ee-mail: blog (at) nblock (dot) org [GPG available]\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/nblock\"\u003egithub\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://gitlab.com/nblock\"\u003egitlab\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"gpgpgp\"\u003eGPG/PGP\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eKey id: \u003ccode\u003e0xEB446E361015043D\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003eKey fingerprint: \u003ccode\u003e4DAC C118 E918 A609 9AFE C0F3 EB44 6E36 1015 043D\u003c/code\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eYou can get my public key from \u003ca href=\"gpg-pubkey-0xEB446E361015043D.asc\"\u003ehere\u003c/a\u003e.\u003c/p\u003e\n\u003ch2 id=\"impressum-site-notice\"\u003eImpressum (site notice)\u003c/h2\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eFlorian Preinstorfer\n4951 Polling im Innkreis\nAustria\n\u003c/code\u003e\u003c/pre\u003e","title":"About me"},{"content":"This page lists some of my code snippets. Visit my github page or gitlab page for more information. 
A list of free software contributions is available upon request.\nFeeds\nDIY Atom feeds in times of social media and paywalls\nAUR packages for Arch Linux\nMy AUR packages for Arch Linux.\nofxstatement-austrian\nConvert statements of Austrian banks to OFX using ofxstatement.\nmrpassword2keepass\nConnect to a MrPassword instance and import all passwords into a Keepass database (kdbx).\nsnolla\nConnect GitLab with Bugzilla.\ngcimport\nConvert various input files (csv, txt) to csv files that can be easily imported with GnuCash.\nvim-dokuwiki\nEnable syntax highlighting when editing DokuWiki pages with VIM.\npdb-cheatsheet\nA cheatsheet for the Python Debugger (pdb and ipdb).\nunused-bibtex\nA simple script to find unused bibtex keys.\nfeedcheck\nCheck availability of feeds (RSS, Atom, …) in OPML or plain input format (file or stdin).\nexchange2ical\nExtract calendar entries from an Exchange 2003 public folder and export them as ics.\n","permalink":"https://nblock.org/code/","summary":"\u003cp\u003eThis page lists some of my code snippets. Visit my \u003ca href=\"https://github.com/nblock\"\u003egithub page\u003c/a\u003e or \u003ca href=\"https://gitlab.com/nblock\"\u003egitlab\npage\u003c/a\u003e for more information. A list of free software contributions is available upon request.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/PyFeeds/PyFeeds\"\u003eFeeds\u003c/a\u003e\u003cbr\u003e\nDIY Atom feeds in times of social media and paywalls\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://aur.archlinux.org/packages/?SeB=m\u0026amp;K=notizblock\"\u003eAUR packages for Arch Linux\u003c/a\u003e\u003cbr\u003e\nMy AUR packages for Arch Linux.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/nblock/ofxstatement-austrian\"\u003eofxstatement-austrian\u003c/a\u003e\u003cbr\u003e\nConvert statements of Austrian banks to OFX using ofxstatement.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/nblock/mrpassword2keepass\"\u003emrpassword2keepass\u003c/a\u003e\u003cbr\u003e\nConnect to a MrPassword instance and import all passwords into a Keepass database (kdbx).\u003c/p\u003e","title":"Code"},{"content":"An incomplete list of talks I gave at conferences and local meetups. 
Head over to workshops for professional in-house trainings and workshops.\nTitle | Details\nGitea | 2024, VALUG, notes\nHome Assistant | 2024, VALUG, notes\nFIDO2, WebAuthn, Passkey und ID Austria | 2023, VALUG, notes\nWireguard, Tailscale and Headscale | 2022, VALUG, notes\nTaskwarrior | 2019, VALUG, slides\nWireGuard | 2018, Linuxwochen Linz, slides\nEin Blick auf SSH | 2017, Grazer Linuxtage, slides\nEin Blick auf SSH | 2017, VALUG, slides\nConfiguration Management \u0026amp; DevOps | 2017, VALUG, with s.n., slides and examples are available upon request\nOn building a free software based development environment for a small company | 2016, LinuxDaysCZ, slides\nScrapy | 2016, VALUG, slides, demos, source\nbtrfs | 2015, Linuxwochen Linz, slides, notes, source\nShow \u0026amp; tell: Web development with Python using Flask \u0026amp; SQLAlchemy | 2015, VALUG\nShow \u0026amp; tell: Staying up to date with TinyTinyRSS | 2015, VALUG\nsport tracking and GNU/Linux | 2014, VALUG, slides, notes, source\nownCloud | 2014, Technologieplauscherl Steyr, slides, source\nownCloud | 2014, VALUG, slides, source\nbtrfs | 2013, VALUG, slides, notes, source\nIntroduction to private financial-accounting with GnuCash | 2013, VALUG, with s.n., slides and examples are available upon request\nTor - The Onion Router | 2012, VALUG, slides, source\nParticipating in Free Software Projects | 2012, VALUG, with laxity and silwol, slides, source\nGit Workshop | 2012, LiWoLi, with silwol, slides, source\nCovert Channel Protocol | 2012, Security Forum, slides\nEinstieg in die verteilte Versionskontrolle mit Git | 2011, VALUG, with silwol, slides, source\nSecure Web Coding for Beginners | 2010, Hacking Night, with dosbartjones, slides\n","permalink":"https://nblock.org/talks/","summary":"\u003cp\u003eAn incomplete list of talks I gave at conferences and local meetups. 
Head over to \u003ca href=\"/workshops\"\u003eworkshops\u003c/a\u003e for\nprofessional in-house trainings and workshops.\u003c/p\u003e\n\u003ctable\u003e\n  \u003cthead\u003e\n      \u003ctr\u003e\n          \u003cth\u003eTitle\u003c/th\u003e\n          \u003cth\u003eDetails\u003c/th\u003e\n      \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eGitea\u003c/td\u003e\n          \u003ctd\u003e2024, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"https://valug.at/events/2024-03-08-gitea/\"\u003enotes\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eHome Assistant\u003c/td\u003e\n          \u003ctd\u003e2024, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"https://valug.at/events/2024-01-12-home-assistant/\"\u003enotes\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eFIDO2, WebAuthn, Passkey und ID Austria\u003c/td\u003e\n          \u003ctd\u003e2023, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"https://valug.at/events/2023-11-17-fido2-webauthn-passkeys/\"\u003enotes\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eWireguard, Tailscale and Headscale\u003c/td\u003e\n          \u003ctd\u003e2022, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"https://valug.at/events/2022-06-10/\"\u003enotes\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eTaskwarrior\u003c/td\u003e\n          \u003ctd\u003e2019, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"slides-valug-taskwarrior.pdf\"\u003eslides\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eWireGuard\u003c/td\u003e\n          \u003ctd\u003e2018, \u003ca href=\"https://www.linuxwochen-linz.at/2018/programm/\"\u003eLinuxwochen Linz\u003c/a\u003e, \u003ca href=\"slides-liwoli18-wireguard.pdf\"\u003eslides\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eEin Blick auf SSH\u003c/td\u003e\n          \u003ctd\u003e2017, \u003ca href=\"https://glt17-programm.linuxtage.at/\"\u003eGrazer Linuxtage\u003c/a\u003e, \u003ca href=\"slides-glt17-ssh.pdf\"\u003eslides\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eEin Blick auf SSH\u003c/td\u003e\n          \u003ctd\u003e2017, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"slides-valug-ssh.pdf\"\u003eslides\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eConfiguration Management \u0026amp; DevOps\u003c/td\u003e\n          \u003ctd\u003e2017, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, with s.n., slides and examples are available upon request\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eOn building a free software based development environment for a small company\u003c/td\u003e\n          \u003ctd\u003e2016, \u003ca href=\"https://www.linuxdays.cz/2016/en/schedule/\"\u003eLinuxDaysCZ\u003c/a\u003e, \u003ca href=\"slides-linuxdays-cz-2016.pdf\"\u003eslides\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eScrapy\u003c/td\u003e\n          \u003ctd\u003e2016, \u003ca 
href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"slides-scrapy.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"https://gitlab.com/valug/notizblock-scrapy-demos\"\u003edemos\u003c/a\u003e, \u003ca href=\"https://gitlab.com/valug/notizblock-scrapy-slides\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003ebtrfs\u003c/td\u003e\n          \u003ctd\u003e2015, \u003ca href=\"http://linuxwochen-linz.at\"\u003eLinuxwochen Linz\u003c/a\u003e, \u003ca href=\"slides-btrfs-linuxwochen-linz-20150530.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"notes-btrfs-linuxwochen-linz-20150530.txt\"\u003enotes\u003c/a\u003e, \u003ca href=\"https://gitlab.com/valug/notizblock-btrfs-slides/tree/linuxwochen-linz-20150530\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eShow \u0026amp; tell: Web development with Python using Flask \u0026amp; SQLAlchemy\u003c/td\u003e\n          \u003ctd\u003e2015, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eShow \u0026amp; tell: Staying up to date with TinyTinyRSS\u003c/td\u003e\n          \u003ctd\u003e2015, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003esport tracking and GNU/Linux\u003c/td\u003e\n          \u003ctd\u003e2014, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"slides-sportstracker.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"notes-sportstracker.txt\"\u003enotes\u003c/a\u003e, \u003ca href=\"https://gitlab.com/valug/notizblock-sporttracker-slides\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eownCloud\u003c/td\u003e\n          \u003ctd\u003e2014, \u003ca href=\"https://github.com/mpopp/Technologieplauscherl-Steyr\"\u003eTechnologieplauscherl Steyr\u003c/a\u003e, \u003ca href=\"slides-owncloud-technologieplauscherl.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"https://gitlab.com/nblock-owncloud-slides/nblock-owncloud-slides\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eownCloud\u003c/td\u003e\n          \u003ctd\u003e2014, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"slides-owncloud.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"https://gitlab.com/nblock-owncloud-slides/nblock-owncloud-slides\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003ebtrfs\u003c/td\u003e\n          \u003ctd\u003e2013, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"slides-btrfs-valug-20131213.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"notes-btrfs-valug-20131213.txt\"\u003enotes\u003c/a\u003e, \u003ca href=\"https://gitlab.com/valug/notizblock-btrfs-slides/tree/valug-20131213\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eIntroduction to private financial-accounting with GnuCash\u003c/td\u003e\n          \u003ctd\u003e2013, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, with s.n., slides and examples are available upon request\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eTor - The Onion Router\u003c/td\u003e\n          \u003ctd\u003e2012, \u003ca 
href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, \u003ca href=\"slides-tor.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"https://gitlab.com/valug/notizblock-tor-slides\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eParticipating in Free Software Projects\u003c/td\u003e\n          \u003ctd\u003e2012, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, with \u003ca href=\"http://dhofer.com\"\u003elaxity\u003c/a\u003e and \u003ca href=\"https://silwol.net\"\u003esilwol\u003c/a\u003e, \u003ca href=\"slides-participating.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"https://gitlab.com/valug/participating-in-free-software-projects-slides\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eGit Workshop\u003c/td\u003e\n          \u003ctd\u003e2012, \u003ca href=\"http://liwoli.at/programm/2012/distributed-version-control-git\"\u003eLiWoLi\u003c/a\u003e, with \u003ca href=\"https://silwol.net\"\u003esilwol\u003c/a\u003e, \u003ca href=\"slides-git-liwoli-2012.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"https://github.com/nblock/slides-git\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eCovert Channel Protocol\u003c/td\u003e\n          \u003ctd\u003e2012, \u003ca href=\"https://www.securityforum.at/security-insights/\"\u003eSecurity Forum\u003c/a\u003e, \u003ca href=\"slides-ccp.pdf\"\u003eslides\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eEinstieg in die verteilte Versionskontrolle mit Git\u003c/td\u003e\n          \u003ctd\u003e2011, \u003ca href=\"https://valug.at\"\u003eVALUG\u003c/a\u003e, with \u003ca href=\"https://silwol.net\"\u003esilwol\u003c/a\u003e, \u003ca href=\"slides-git.pdf\"\u003eslides\u003c/a\u003e, \u003ca href=\"https://github.com/nblock/slides-git\"\u003esource\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eSecure Web Coding for Beginners\u003c/td\u003e\n          \u003ctd\u003e2010, \u003ca href=\"http://www.hackinggroup.at/securitygroup/hackingnight2010.php\"\u003eHacking Night\u003c/a\u003e, with \u003ca href=\"http://www.xing.com/profile/Florian_Brunner10\"\u003edosbartjones\u003c/a\u003e, \u003ca href=\"secure-webcoding-slides.pdf\"\u003eslides\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e","title":"Talks"},{"content":"You are interested in professional in-house training for developers and need someone who can explain the inner workings of modern, state-of-art development practices? I provide in-house development training and workshops for teams, packed with practical tips and lots of guided exercises.\nGit Git is the de facto distributed version control system used today. 
This workshop covers:\nThe history of Git: how it came to be and its place among competitors Basic understanding and usage Usage of the command line, various graphical interfaces, and IDE integrations Collaboration in small, medium, and large teams Important Git features required by your team\u0026rsquo;s workflow Development tips and tricks from an experienced Git user The architecture and inner workings of Git Interoperation with other version control systems such as SVN Git services in your IT infrastructure (self-hosted, SaaS) Planning and support for a successful integration or migration Common operating systems such as Linux, Windows and MacOS The workshop takes 1-3 days, depending on your requirements, and is available in German or English. The content of the workshop can be customized to suit your needs. Feel free to contact me for more details or a tentative offer.\n","permalink":"https://nblock.org/workshops/","summary":"\u003cp\u003eAre you interested in professional in-house training for developers and\ndo you need someone who can explain the inner workings of modern, state-of-the-art\ndevelopment practices? I provide in-house development training and\nworkshops for teams, packed with practical tips and lots of guided\nexercises.\u003c/p\u003e\n\u003ch2 id=\"git\"\u003eGit\u003c/h2\u003e\n\u003cp\u003eGit is the de facto distributed version control system used today. This\nworkshop covers:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eThe history of Git: how it came to be and its place among competitors\u003c/li\u003e\n\u003cli\u003eBasic understanding and usage\u003c/li\u003e\n\u003cli\u003eUsage of the command line, various graphical interfaces, and IDE\nintegrations\u003c/li\u003e\n\u003cli\u003eCollaboration in small, medium, and large teams\u003c/li\u003e\n\u003cli\u003eImportant Git features required by your team\u0026rsquo;s workflow\u003c/li\u003e\n\u003cli\u003eDevelopment tips and tricks from an experienced Git user\u003c/li\u003e\n\u003cli\u003eThe architecture and inner workings of Git\u003c/li\u003e\n\u003cli\u003eInteroperation with other version control systems such as SVN\u003c/li\u003e\n\u003cli\u003eGit services in your IT infrastructure (self-hosted, SaaS)\u003c/li\u003e\n\u003cli\u003ePlanning and support for a successful integration or migration\u003c/li\u003e\n\u003cli\u003eCommon operating systems such as Linux, Windows and MacOS\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThe workshop takes 1-3 days, depending on your requirements,\nand is available in German or English. The content of the workshop can\nbe customized to suit your needs. Feel free to\n\u003ca href=\"/about\"\u003econtact me\u003c/a\u003e for more details or a tentative offer.\u003c/p\u003e","title":"Workshops and training"}]