March 05, 2026

hackergotchi for SparkyLinux

SparkyLinux

ElectronMail

There is a new application available for Sparkers: ElectronMail. What is ElectronMail? Features: – Open Source – Reproducible builds – Cross-platform – Full-text search – JavaScript-based/unlimited message filtering – Offline access to email messages – Multi-account support – Automatic login into the app – Automatic login into the email accounts – Persistent email account…

Source

05 March, 2026 04:20PM by pavroo

hackergotchi for Deepin

Deepin

deepin Community Monthly Report for February 2026

Learn more about deepin details, historical versions, user reviews, etc.: https://distrowatch.com/table.php?distribution=deepin I. February Community Data Overview II. Community Products 1. deepin 25.0.12 Internal Testing Launched: File Manager Enhancements, Multi-Screen & Audio Issues Fixed In February, the deepin 25 internal test version 25.0.12 was released, focusing on expanding file manager functionality and optimizing system stability: File Manager Efficiency Upgrade: supports right-clicking to pin tabs at the top, keeping important directories permanently accessible; when previewing images, the sidebar supports drag-and-drop enlargement, making image viewing more efficient. Email Function Enhancement: added an email printing feature to meet daily office needs. High-Frequency Issue Fixes: resolved issues such ...Read more

05 March, 2026 06:24AM by xiaofei

March 02, 2026

hackergotchi for GreenboneOS

GreenboneOS

Emergency Patch: CVE-2026-20127 in Cisco Catalyst SD-WAN Actively Exploited Against Critical Infrastructure

On February 25th, 2026, a new critical-severity CVE affecting Cisco Catalyst SD-WAN was both published and added to CISA’s Known Exploited Vulnerabilities (KEV) list. CVE-2026-20127 (CVSS 10) allows an unauthenticated remote attacker to gain administrative access on affected devices. The flaw is classified as an authentication bypass [CWE-287] caused by a faulty peering authentication […]

02 March, 2026 01:47PM by Joseph Lee

March 01, 2026

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2026/02

The 2nd monthly Sparky project and donation report of 2026: – Linux kernel updated up to 6.19.5, 6.18.15-LTS, 6.12.74-LTS, 6.6.127-LTS – Sparky 8.2 Seven Sisters released – added Linux kernel 6.18 LTS to sparky repos; 6.6 LTS will not be updated any more. Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send…

Source

01 March, 2026 03:43PM by pavroo

February 27, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

[STABLE RELEASE] BunsenLabs Carbon Official ISOs

The BunsenLabs team are happy to announce our latest release, BunsenLabs Carbon.

Based on Debian Trixie, Carbon has had many improvements, including a new desktop appearance and assistance (coming soon) for users who want to experiment with Wayland.

If you have liked BL in the past, you're going to love this.

There is much more detail in the Release Notes: https://forums.bunsenlabs.org/viewtopic.php?id=9675

Downloads are available from the BunsenLabs website: https://www.bunsenlabs.org/installation.html

A big thank you to all the community members who contributed feedback, suggestions and code!

The BunsenLabs Team

27 February, 2026 12:00AM

February 26, 2026

hackergotchi for Tails

Tails

Tails 7.5

Changes and updates

  • Update Tor Browser to 15.0.7.

  • Simplify the home page of Tor Browser.

  • Update the Tor client to 0.4.9.5.

  • Update Thunderbird to 140.7.1.

  • Install Thunderbird as additional software to improve its security, if you have both the Thunderbird Email Client and Additional Software features of the Persistent Storage turned on.

    Until Tails 7.5, Mozilla would release a new version of Thunderbird only a few days after we released a new version of Tails. As a consequence, the version of Thunderbird in Tails was almost always outdated, with known security vulnerabilities.

    By installing Thunderbird as additional software, the latest version of Thunderbird is installed automatically from your Persistent Storage each time you start Tails.

    If the Thunderbird Migration dialog below appears when you start Thunderbird, it means that Tails successfully installed Thunderbird as additional software.

    Thunderbird Migration: Tails installed Thunderbird as additional software to improve its security.

  • Include the language pack for Mexican Spanish in Thunderbird in addition to the language pack for Spanish from Spain.

For more details, read our changelog.

Get Tails 7.5

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.5.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.5 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.5 directly:

26 February, 2026 12:00AM

February 25, 2026

hackergotchi for VyOS

VyOS

VyOS Stream 2026.02 is available for download

Hello, Community!

VyOS Stream 2026.02 is available for download now. It features multiple backports from the rolling release, including TLS support for syslog, NAT66 source groups, IPFIX support in VPP, FRR and VPP updates, and over fifty bug fixes. It also makes the VPP configuration subsystem use DPDK as the default driver for NICs that support it, falling back to XDP automatically if needed – there is no longer any need, or option, to configure the driver by hand.

25 February, 2026 12:32PM by Daniil Baturin (daniil@sentrium.io)

February 24, 2026

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.3.1-35 Released

This release of Clonezilla live (3.3.1-35) includes major enhancements and bug fixes.

ENHANCEMENTS AND CHANGES SINCE 3.3.0-33

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2026/Feb/20).
  • The Linux kernel was updated to 6.18.9-1.
  • Partclone was updated to 0.3.45.
  • Implemented mechanisms for cloning 4kn disks to 512n/e disks and 512n/e disks to 4kn disks. Thanks to john (zx1100e1).
  • Improved functions do_ntfs_512to4k_fix and do_ntfs_4kto512_fix by updating Total Sectors (Offset 40) for NTFS.
  • Added a new program, ocs-pt-512-4k-convert, to convert 512B to 4kn partition tables.
  • Rewrote ocs-expand-gpt-pt to be more robust, including 512B to 4kn conversion if mismatched sectors are detected.
  • Added a mechanism to change the master key from a LUKS header. Thanks to nbergont for the contribution.
  • Rewrote ocs-get-nic-fw-lst to retrieve firmware lists directly from Linux kernel modules.
  • Added two more info files in image dir: fdisk.list and blkdev.json. Thanks to arij for the suggestion.
  • Included makeboot64.cmd instead of makeboot64.bat in the live system. Thanks to Tom Hoar.
  • Improved BitLocker support: partitions now work on the clone server (ocs-onthefly), and the system now prompts for passwords again if entered incorrectly. Thanks to Marcos Diez.
  • Enabled the "-edio" (Direct I/O) option in the TUI by default.
  • Restricted the default enabling of the "-smtd" and "-smmcb" options to non-x86-64 machines.
  • Added the 'lsb-release' package to the live system.
  • The time synchronization mechanism can now be disabled if ocs_time_sync="no" is assigned in the boot parameters.
  • Updated Brazilian Portuguese translation. Thanks to Rafael Fontenelle.
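The NTFS sector-size fixes above hinge on one detail: the boot sector stores Total Sectors as a 64-bit little-endian value at offset 40, and it must be rescaled when the logical sector size changes. A rough Python sketch of just that step (illustrative only; the real Clonezilla scripts handle many more fields):

```python
import struct

SECTOR_OLD, SECTOR_NEW = 512, 4096
TOTAL_SECTORS_OFFSET = 40  # NTFS boot sector: Total Sectors, 8-byte LE

def rescale_total_sectors(boot_sector: bytearray) -> bytearray:
    """Rescale the NTFS Total Sectors field when moving 512B -> 4Kn media.
    Illustrative only -- a real converter must also fix other BPB fields."""
    (total,) = struct.unpack_from("<Q", boot_sector, TOTAL_SECTORS_OFFSET)
    new_total = total * SECTOR_OLD // SECTOR_NEW
    struct.pack_into("<Q", boot_sector, TOTAL_SECTORS_OFFSET, new_total)
    return boot_sector

# Toy boot sector claiming 2048000 sectors of 512 B (about 1000 MB).
bs = bytearray(512)
struct.pack_into("<Q", bs, TOTAL_SECTORS_OFFSET, 2048000)
rescale_total_sectors(bs)
(result,) = struct.unpack_from("<Q", bs, TOTAL_SECTORS_OFFSET)
print(result)  # 256000
```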

BUG FIXES

  • Fixed an issue in function udp_send_part_img_or_dev where multicast sending from raw devices failed for partitions with unknown filesystems; it now utilizes partclone.dd.
  • Fixed a bug where ocs-get-dev-info failed to identify extended partitions on MBR disks.
  • Removed extra LVM and LUKS information in dev-fs.list that previously caused partition order errors and failed network clones via ocs-onthefly. Thanks to kokutou kiritsugu for reporting this.
  • Resolved a bug related to restoring MTD/eMMC devices.
  • Appended '--rescue' to partclone options to bypass mtdblock read errors.
  • Fixed a bug where ocs-sr could not find devices using a PTUUID.
  • Updated ocs-live-run-menu to set the TERM as fbterm so box-drawing characters display correctly. Thanks to ottokang for identifying this.
  • Improved ocs-blk-dev-info efficiency and removed double quotation marks from model and serial outputs to prevent menu breakage. Thanks to pete-15.
  • Updated ocs-cvt-dev to avoid name collisions during the conversion process.

24 February, 2026 11:36AM by Steven Shiau

February 23, 2026

hackergotchi for Univention Corporate Server

Univention Corporate Server

Nubus for Kubernetes 1.17: Release Highlights

With this blog post, I am starting a new series in which I present the updates of the roughly monthly Nubus for Kubernetes releases. We begin with a look back at version 1.17, which was released at the end of January and brings many improvements for Nubus operators – including the new Structured Logging format for Kubernetes.

Structured Logging

Since version 1.17, Nubus for Kubernetes offers a new output format for log entries: Structured Logging. This uses the open standard logfmt and generates log outputs that are easy to process both for humans and log analysis tools.

This makes auditing and monitoring in well-known log analysis tools such as the ELK Stack or Grafana Loki significantly easier. Nubus sends the log entries directly to these or other analysis tools available in the data center, where they can be evaluated together with information from other software solutions.

Details on the log format can be found in the release notes and will also be documented in the Nubus Manual in the future.
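To give a feel for the format: logfmt is just whitespace-separated key=value pairs, with quoting for values that contain spaces. A minimal parser sketch (the field names in the sample line are made up, not Nubus's actual schema):

```python
import re

# Minimal logfmt parser -- for illustration; real deployments would ship
# these lines to a collector such as Loki rather than parse them by hand.
LOGFMT_PAIR = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

def parse_logfmt(line: str) -> dict:
    """Return a dict of key/value pairs from one logfmt-formatted line."""
    return {key: quoted or bare for key, quoted, bare in LOGFMT_PAIR.findall(line)}

entry = 'time=2026-01-28T10:15:00Z level=info component=umc msg="user logged in"'
fields = parse_logfmt(entry)
print(fields["level"], fields["msg"])  # info user logged in
```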

Moving Away from ingress-nginx

The Ingress in a Kubernetes cluster is responsible for managing external access to the services running inside. It primarily acts as a reverse proxy for HTTP connections and for HTTPS encryption. This Kubernetes component is also modular, allowing operators to choose between different implementations.

Currently, Nubus for Kubernetes only supports the ingress-nginx implementation in the delivered Helm charts. This was long the standard, but the end of its maintenance has recently been announced. Operators are therefore forced to switch to other Ingress solutions.

With version 1.17, the dependency on ingress-nginx has been reduced, enabling the use of other implementations in the future. With the upcoming release 1.18, all dependencies will be removed and Nubus will be tested with traefik and HA-Proxy Ingress.

UDM and Provisioning Move Closer Together

The Provisioning component of Univention Nubus ensures that changes from the Univention Directory Manager (UDM), such as new users or groups, are passed on to other systems. Previously, provisioning used its own library, the so-called Transformer, to convert data from the directory service into the Nubus data model.

In version 1.17, this functionality was integrated directly into the UDM REST API. This means that the data model is now consistent throughout, complexity is reduced, and errors caused by different implementations are avoided. For operators, this means more reliable processes with less maintenance effort.

Updates, Updates, Updates

With each release of the Nubus for Kubernetes container images, the underlying open-source software is also updated. Version 1.17 therefore brings numerous small bug fixes and security updates. All details can be found in the release notes.

The post Nubus for Kubernetes 1.17: Release Highlights appeared first on Univention.

23 February, 2026 03:04PM by Ingo Steuwer

February 20, 2026

Sovereign IT with Open Source: How to Build Your Own Modular Application Stack

Digital independence begins with IT architecture: organizations that want to operate IT services sovereignly need open standards, centralized identity management, and full control over users, roles, and access. This article describes how IAM becomes the solid foundation of a modular application stack – flexible, secure, future-proof, and easier than you might think.

Digital sovereignty is more than an internet hype or marketing slogan. It determines whether organizations can shape their processes themselves – or remain dependent on vendors, licensing models, and proprietary interfaces. The good news: by relying on open standards and open source, a wide variety of IT services can be seamlessly integrated – from file servers to specialized applications – allowing the step-by-step construction of a software stack optimized for one’s own needs, benefiting from its modularity and gaining independence from hyperscaler services.

For applications and services to work smoothly together, a connecting element is required: an Identity & Access Management (IAM) system that handles the integration and management of users, roles, and permissions – securely linking all applications and enabling cross-application data flows. Such an IAM manages user identities, regulates access rights, and allows the automation of entire process chains.
In short: without IAM, there is no application stack – at least not one that remains controllable in the long term.

Below, I will introduce central building blocks for a technically sound implementation of an open IAM and show which standards have proven effective and which architectural decisions make the difference – from single sign-on with OpenID Connect or SAML, single logout via frontend or backchannel logout, user lifecycle management with SCIM, a single source of truth for roles and contextual control of permission assignment, automation and deployment with Kubernetes & Helm, to provisioning.

IAM as the Foundation of a Sovereign Application Stack

Organizations that want to operate an application stack themselves need more than containers, computing resources, and a colorful mix of (open-source) components. The challenge lies in a well-thought-out architecture: how do all services interact cleanly – and how is the overview of access, roles, and data flows maintained?

This is exactly where Identity & Access Management (IAM) comes into play. As versatile as modern applications are, without centralized identity and access management, shadow identities, fragmented roles, and security gaps arise. Without overarching IAM, organizations sooner or later face the same problems as with traditional silo solutions: login credentials circulate via email, former employees retain unintended access to sensitive data, and no one really knows who has which rights. An open IAM solves these problems based on established protocols and interfaces.

A solid foundation alone does not yet make a secure house. The right building blocks are also needed – starting with authentication.

 

Diagram 1: Modular infrastructures managed via a central Identity & Access Management system with standardized interfaces

Building Block 1: Single Sign-on and Single Logout

A modern application stack often includes services such as file storage, webmail, video conferencing, office and project management software, or industry-specific applications. To prevent login from becoming increasingly complex and time-consuming for users across these applications, a central authentication mechanism is needed: single sign-on (SSO).

With SSO, users log in once and gain access to all connected services. Authentication is carried out using protocols such as OpenID Connect (OIDC) or SAML. Both are widely adopted open standards supported by almost all modern web applications.

SSO is convenient – but it only solves half the problem. While logging in usually works smoothly, logging out often falls short. Single logout (SLO) means that logging out once also closes all sessions in the connected services. In practice, this step is often overlooked – with consequences for security and data protection. Logging out of the email client does not automatically terminate sessions in the video conference service, file storage, or project platform – a potential avenue for misuse.

Depending on the protocol used, different single logout methods exist – for example, frontend logout, where the browser actively terminates all sessions, or backchannel logout, where the IAM communicates directly with connected services. Which method is possible depends on the capabilities of the respective application and the care taken in technical integration.

Reality shows: while SSO is often quickly hailed as a success, SLO is the real challenge. A missing logout mechanism may seem harmless, but it can become a security risk – especially with sensitive data or public workstations.

Building Block 2: User Lifecycle Management

Single sign-on enables convenient access to IT services, but what happens before and after? Proper user account management requires monitoring the entire lifecycle of identities: from creating new users or groups, through changes, to deletion. This is what user lifecycle management is about.

Many systems rely solely on the login event: an account is automatically created when someone logs in for the first time. This may suffice for simple scenarios – but for controlled, traceable IT management, this model is insufficient. What happens if someone never logs in? Or if a person leaves the organization?

Without centralized event control, shadow identities – accounts existing in the system but no longer linked to a person – emerge quickly. Rights and group memberships are difficult to synchronize, too. Losing oversight is not just an organizational issue but also a data protection and security problem.

For technical implementation, several approaches exist:

  • Via APIs – whether open or proprietary – data can often be written directly to target systems. This offers flexibility but usually comes with high integration effort and low reusability.
  • Directory services (e.g., LDAP-based) can serve as a shared source for user and group data but generally only work with systems that actively access them.
  • System for Cross-domain Identity Management (SCIM) is an open standard for provisioning identity data. It allows events like account creation, name changes, or deletion to be transmitted automatically and standardized between systems – including groups and permissions.

Comprehensive user lifecycle management with SCIM or a comparable mechanism is not only more convenient but also safer. It prevents data remnants, reduces errors, and allows identities to be managed consistently across system boundaries – regardless of the size or heterogeneity of the stack.
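Concretely, a SCIM provisioning event is an HTTP call whose JSON body follows the core schema from RFC 7643. A minimal create-user payload might look like this; the attribute values are illustrative, and the endpoint is a placeholder:

```python
import json

# Minimal SCIM 2.0 create-user payload (core User schema, RFC 7643).
# Values are illustrative; a real client POSTs this to <base-url>/Users
# with Content-Type: application/scim+json and suitable authentication.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@example.org",
    "name": {"givenName": "Alice", "familyName": "Example"},
    "emails": [{"value": "alice@example.org", "primary": True}],
    "active": True,
}
body = json.dumps(scim_user, indent=2)
print(body)
```

Deprovisioning works the same way: a PATCH setting `active` to false, or a DELETE on the user resource, so the target system learns about offboarding without waiting for a login event.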

 

If you would like to explore the topic of User Lifecycle Management in more depth and learn how our IAM solution Nubus enhances security, efficiency, and compliance in schools and enterprises, you can find further insights in this article: https://www.univention.com/blog-en/2025/10/user-lifecycle-management-nubus/

Diagram 2: Managing digital identities centrally with Nubus and providing them across applications

Building Block 3: Permissions and Roles

Access alone is not enough – equally important is the question: what is a user allowed to do within an application? An open IAM must not only manage identities but also assign and control permissions in a differentiated manner. Groups, roles, and permissions must be represented so they can be automatically transferred to different applications. Many IAM systems rely on role models assigning certain rights to user groups – e.g., “teacher,” “employee,” “project manager,” or “admin.”

Technically, there are two common approaches:

  • OIDC allows roles and permissions to be included in claims, which contain information such as groups, role names, or attributes – e.g., role=project-admin. This works well but requires IAM and applications to agree on what each term means. It is not standardized.
  • SCIM goes a step further: it explicitly defines how entitlements and roles can be provisioned. Groups and rights can be transferred and kept consistent across systems – provided the target application fully supports SCIM. Like OIDC, IAM and applications must agree on the meaning of entitlements.

In practice, limits are quickly reached. Many applications interpret claims differently or ignore entitlements entirely. To be safe, a clear architectural decision is required: there must be a leading instance where users, roles, and permissions are managed – a single source of truth. An IAM ideally fulfills this role: storing rights centrally, synchronizing them with other systems, and remaining independent of specific applications.
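Because claim and entitlement names are not standardized, the application ends up holding a mapping from the IAM's role vocabulary to its own permissions. A minimal sketch of that agreement (role and permission names are made up):

```python
# Hypothetical mapping from IAM role claims to application permissions;
# the vocabulary on both sides must be agreed between IAM and application.
ROLE_PERMISSIONS = {
    "project-admin": {"create_project", "delete_project", "invite_member"},
    "member": {"view_project", "comment"},
}

def permissions_for(claims: dict) -> set:
    """Union of permissions for every role listed in the token's claims.
    Unknown roles are ignored rather than treated as an error."""
    perms = set()
    for role in claims.get("roles", []):
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

print(sorted(permissions_for({"sub": "alice", "roles": ["member"]})))
# → ['comment', 'view_project']
```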

In addition to static role-based permissions, contextual control is gaining importance: who can access what, when, where, and from which device – modern IAM systems can represent these conditions in a granular manner. This results in a permissions model that is not only differentiated but flexible enough for hybrid scenarios.

Automation and Deployment with Kubernetes & Helm

Organizations that want to reliably operate open components like IAM, directory services, office applications, or file storage need a platform that enables repeatable deployments, updates, and integrations – even in running operations.

Kubernetes has become the standard in many organizations for operating containerized applications scalably and resiliently. Combined with the Helm package manager, complex setups like an IAM with associated services can be described declaratively, installed automatically, and reproduced as needed – e.g., in test, integration, and production environments.

In self-built application stacks, this is essential: without automated deployment, releases quickly become confusing, configurations inconsistent, and extensions risky. Kubernetes and Helm provide structure and make it easier to operate IAM modularly, update it regularly, and integrate it traceably. A prerequisite for this repeatability is a consistent continuous delivery approach: only when builds, tests, and deployments are standardized and automated can the quality of the overall system be reliably ensured – across many components.
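Declaratively, such a deployment amounts to a chart plus an environment-specific values file. The fragment below is purely illustrative (the key names depend on the chart actually in use) and would be applied with `helm upgrade --install` per environment:

```yaml
# Illustrative values override for one environment; key names are
# hypothetical and depend on the chart actually in use.
global:
  domain: portal.example.org
ingress:
  enabled: true
  className: traefik
replicaCount: 2
```

Keeping one such file per environment (test, integration, production) under version control is what makes the setups reproducible.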

Secure Integration of Existing Systems

Many organizations already have established infrastructures – e.g., a central Active Directory for user management. Achieving digital sovereignty does not require starting from scratch; existing systems can be cleverly integrated.

When introducing an open IAM, it is often sensible to include existing directories initially and import or synchronize identity data from them. In hybrid scenarios, the new IAM can become the leading system or run in parallel to the existing directory, gradually replacing it step by step.

Technically, several options exist: manual exports can be implemented quickly but are error-prone and unsustainable. Connections via LDAP, SCIM, or an API, where changes to user data are retrieved automatically or in an event-driven fashion, are better. What is crucial is that processes like onboarding, role assignment, and offboarding work seamlessly – regardless of where the data is maintained.

In practice, a gradual transition is often advisable: existing systems remain initially, while new components are integrated and tested according to standards. This allows a controlled change – without loss of functionality but with growing control.

Open Source in Practice: Examples of Sovereign IT

A modular application stack consists of more than individual services – the interaction between them is decisive. Only when identities, roles, applications, and data flows are integrated via open standards does an architecture emerge that can be operated independently, adapted, and sustainably maintained. The openDesk project, promoted by the Zentrum für Digitale Souveränität, demonstrates that this is practicable even in very large environments.

openDesk relies on a combination of proven open-source components: Open-Xchange for email and calendar, Nextcloud for files, Collabora for online document editing, Element for messaging, OpenProject for project management – and the IAM Nubus from Univention GmbH, which as a central link enables the technical interaction of the components and convenient access to all services via a modern portal. The modular setup works – for example at the Robert Koch-Institut and in the Bundestag administration.

Nubus demonstrates the important role an open IAM can play: it not only manages identities but also centrally controls access rights and provides this information via open interfaces. An example is the ambitious open-source project of the state of Schleswig-Holstein, which is currently building a new statewide directory service with Nubus. This will eventually replace the previous Active Directory environment and enable secure access to specialized applications, devices, and central IT systems – role-based, data protection compliant, and fully under local control.

 

Diagram 3: Modular infrastructures illustrated using the openDesk case study

To get started with a sovereign cloud infrastructure using Nubus as IAM, pre-configured integrations are helpful. For example, the Active Directory connection allows an open IAM to synchronize with an existing AD – including accounts, groups, and passwords. There are also ready-made integration packages for complete applications such as Nextcloud or Open-Xchange, which make connecting third-party software to Nubus particularly easy. Further packages are being developed and gradually released. Connector tools for Google Workspace, Apple School Manager, or Microsoft 365 also support single sign-on to these cloud services. These tools facilitate a smooth transition to open, controllable architectures – without having to replace everything immediately.

These projects show: digital sovereignty is not an abstract goal but can be concretely implemented with open source – step by step, traceably, and sustainably.

Conclusion: IAM as the Key to Digital Sovereignty

Digital sovereignty is not created by a product label but by an architecture that remains open, controllable, and adaptable. Organizations that want to operate IT services themselves or integrate them into existing structures need an IAM that grows with them – from authentication through roles and permissions to full automation.

Many organizations build their own IAM systems from LDAP, scripts, and database reconciliations in hopes of maximum flexibility. Such homemade solutions quickly hit limits: lack of logging, poor scalability, security risks. Using an established open-source IAM, by contrast, offers tested standards, community support, and extensibility – without technical debt.

A sovereign IT stack needs more than containers and services. Only with a central IAM can identities, roles, and access rights be reliably managed – across all applications. It is the connecting element that turns individual parts into a functional whole: interoperable, controllable, and sustainably maintainable.

This article has already been published in Informatik Aktuell and can be viewed here.

 

Do you want to ensure your digital operational capability even in emergency situations and keep critical IT services running reliably?
With “Nubus for Business Continuity”, a prepared, parallel IAM runs in standby mode, ready to take over immediately in a crisis and maintain access to applications, systems, and data. Learn how a sovereign IAM strategy can help you reduce risks and strengthen your IT resilience here: https://www.univention.com/solutions/nubus-for-business-continuity/

The post Sovereign IT with Open Source: How to Build Your Own Modular Application Stack appeared first on Univention.

20 February, 2026 07:09AM by Ingo Steuwer

February 18, 2026

hackergotchi for Purism PureOS

Purism PureOS

Privacy Under Siege

Surveillance, Breaches, and Gaps in the Law It has become clear that privacy risks are not isolated incidents. They are part of a larger pattern. Major organizations continue to experience large-scale data breaches. Brightspeed recently suffered a breach affecting around one million customers. Brightspeed opened an internal cybersecurity investigation in early January this year, after Crimson […]

The post Privacy Under Siege appeared first on Purism.

18 February, 2026 04:55PM by Purism

February 16, 2026

hackergotchi for SparkyLinux

SparkyLinux

Sparky 8.2

The second update for Sparky 8 – 8.2 – is now available. This is a quarterly update of the Sparky 8 “Seven Sisters” stable release. Sparky 8 is based on and fully compatible with Debian 13 “Trixie”. Main changes: – All packages updated from the stable Debian and Sparky repositories as of February 14, 2026. – Linux kernel: 6.12.69-LTS (6.19.1, 6.12.72 LTS, 6.6.125-LTS in sparky repositories) …

Source

16 February, 2026 01:52PM by pavroo

February 13, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

32 bit support will end with BunsenLabs Boron

Debian - on which BunsenLabs is based - have dropped 32-bit kernels, installers, and ISO images from the current stable Trixie release.

BunsenLabs will be forced to do likewise, and the upcoming Carbon release will have no 32-bit ISO images or 32-bit package repositories.

Users with 32-bit machines can continue to use BunsenLabs Boron for as long as Debian Long Term Support for Bookworm continues, which is expected to last until June 30, 2028:
https://wiki.debian.org/LTS

Previous discussion:
https://forums.bunsenlabs.org/viewtopic … 48#p140748

13 February, 2026 12:00AM

February 11, 2026

hackergotchi for GreenboneOS

GreenboneOS

January 2026 Threat Report: Off to a Raucous Start – Part 2

So far, 2026 is off to a raucous start. With so much activity in the software vulnerability landscape, it’s easy to understand the concerns of global executives discussed in Part 1 of the January 2026 Threat Report. This volatility also highlights the value of Greenbone’s industry-leading detection coverage. In Part 2 of the January Threat […]

11 February, 2026 10:20AM by Joseph Lee

hackergotchi for Tails

Tails

Tails 7.4.2

This release is an emergency release to fix critical security vulnerabilities in the Linux kernel.

Changes and updates

  • Update the Linux kernel to 6.12.69, which fixes DSA 6126-1, multiple security vulnerabilities that could allow an application in Tails to gain administration privileges.

    For example, if an attacker was able to exploit other unknown security vulnerabilities in an application included in Tails, they might then use the vulnerabilities fixed in DSA 6126-1 to take full control of your Tails and deanonymize you.

    This attack is very unlikely, but could be performed by a strong attacker, such as a government or a hacking firm. We are not aware of this attack being used in practice.

  • Update Thunderbird to 140.7.1.

Fixed problems

  • Fix opening the Wi-Fi settings from the Tor Connection assistant. (#18587)

  • Fix reopening Electrum when it was not closed cleanly. (#21390)

  • Fix applying the language saved to the USB stick in the Welcome Screen. (#21383)

For more details, read our changelog.

Get Tails 7.4.2

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.4.2.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.4.2 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.4.2 directly:

11 February, 2026 12:00AM

February 06, 2026

hackergotchi for Qubes

Qubes

Fedora 43 templates available for Qubes OS 4.3

The following new Fedora 43 templates are now available for Qubes OS 4.3:

  • fedora-43-xfce (default Fedora template with the Xfce desktop environment)
  • fedora-43 (alternative Fedora template with the GNOME desktop environment)
  • fedora-43-minimal (minimal template for advanced users)

Note: Fedora 43 template availability for Qubes OS 4.2 will be announced separately.

There are two ways to upgrade a template to a new Fedora release:

  1. Recommended: Install a fresh template to replace an existing one. This option is simpler for less experienced users, but it won’t preserve any modifications you’ve made to your template. After you install the new template, you’ll have to redo your desired template modifications (if any) and switch everything that was set to the old template to the new template. If you choose to modify your template, you may wish to write those modifications down so that you remember what to redo on each fresh install. To see a log of package manager actions, open a terminal in the template and use the dnf history command.

  2. Advanced: Perform an in-place upgrade of an existing Fedora template. This option will preserve any modifications you’ve made to the template, but it may be more complicated for less experienced users.
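The recommended route can be sketched with dom0 commands; the qube name "work" is an example, and the sketch is guarded so it only prints a notice outside Qubes OS:

```shell
# Sketch of option 1 (fresh template), run in dom0.
# "work" is an example qube name, not part of the announcement.
if command -v qvm-template >/dev/null 2>&1; then
    qvm-template install fedora-43-xfce            # install the new template
    qvm-prefs work template fedora-43-xfce         # switch one qube to it
    qubes-prefs default_template fedora-43-xfce    # update the system default
else
    echo "qvm-template not found: run inside Qubes OS dom0"
fi
```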

Note: No user action is required regarding the OS version in dom0 (see our note on dom0 and EOL).

06 February, 2026 12:00AM

February 05, 2026

hackergotchi for Deepin

Deepin

deepin Community Monthly Report for January 2026

Learn more about deepin details, historical versions, user reviews, etc.: https://distrowatch.com/table.php?distribution=deepin I. Overview of Community Data for January 2026 II. Community Products 2.1 Release of deepin 25.0.10 Version Image: Comprehensive Optimization of Installation Experience and System Interaction In January 2026, deepin officially released the deepin 25.0.10 system image, focusing on experience upgrades for the installation process, file management, and system interaction. Optimized System Installation Experience: Enhanced data formatting prompts during full-disk installation, supporting the retention of user data and reuse of original account configurations, simplifying system migration and upgrade processes. Improved File Manager Efficiency: Added features such as automatic scrolling during file drag-and-drop, ...Read more

05 February, 2026 10:14AM by xiaofei

hackergotchi for GreenboneOS

GreenboneOS

January 2026 Threat Report: Off to a Raucous Start

So far, 2026 is off to a raucous start. The number of critical severity vulnerabilities impacting widely deployed software is staggering. Defenders need to scan widely and scan often to detect new threats in their infrastructure and prioritize mitigation efforts based on the potential impact to business operations, privacy regulations, and other compliance responsibilities. Defenders […]

05 February, 2026 07:04AM by Joseph Lee

February 04, 2026

hackergotchi for Purism PureOS

Purism PureOS

Sim Swap Attacks Surging

SIM swap attacks are skyrocketing. A SIM Swap attack is when cybercriminals hijack mobile numbers by convincing carriers to transfer a victim’s phone number to a SIM card they control. Once successful, attackers intercept text-based authentication codes, unlocking access to cryptocurrency wallets, banking apps, and social media accounts.

The post Sim Swap Attacks Surging appeared first on Purism.

04 February, 2026 06:26PM by Purism

hackergotchi for ZEVENET

ZEVENET

When Open Source Infrastructure Stops Being Easy to Operate

Open Source infrastructure is often a deliberate and well-reasoned choice. It offers transparency, control and a level of flexibility that fits well with how many engineering teams like to build and operate systems. Deploying an open source load balancer or reverse proxy is usually a conscious decision, backed by solid documentation, community knowledge and proven behavior in production.

In most cases, it performs exactly as expected. Configuration is understandable, behavior is predictable and the system feels under control.

The challenge does not appear at deployment time. It emerges later, as traffic increases, environments expand and the same platform has to support more services, more changes and more operators. Configuration grows, operational tasks multiply and the margin for error narrows. Changes that were once straightforward start requiring coordination, validation and caution.

At that stage, the problem is not the software itself. The difficulty lies in operating open source infrastructure reliably as the system grows and operational demands increase.

An open-source load balancer in a growing environment

At this stage, most teams know the technology well. They trust Open Source and often run mature projects like HAProxy, NGINX, Apache, or even the SKUDONET Community Edition. These tools are proven, fast and predictable, and they give administrators full control over how traffic is handled.
As the environment grows, friction starts to appear:

  • A single configuration evolves into multiple files spread across environments
  • Changes require coordination across teams and systems
  • Visibility relies on logs that are not always centralized or easy to correlate
  • Updates and patches must be planned, tested and rolled out manually
  • High-availability setups work, but upgrading them without disruption becomes increasingly difficult

Security adds more pressure. Rules, ACLs or WAF logic exist, but tuning them safely takes effort. When something goes wrong, it is not always clear whether the issue comes from configuration, traffic patterns or the infrastructure itself.

None of this breaks the system. But it slows it down operationally. The load balancer still works, yet running it demands more time, more care and more experience than before. This is usually when teams start questioning whether relying only on community tooling is still the right model for their current scale.

The natural next step: teams start looking beyond community tools

When this point is reached, teams know what is not working and they start by looking around the ecosystem they already trust. Users of HAProxy, NGINX or Apache usually do not want to replace their stack. Instead, they evaluate the commercial or enterprise options built around the same technologies, expecting easier operation, better visibility and safer upgrades.
These editions typically promise:

  • centralized management
  • technical support
  • safer update and upgrade processes
  • additional security capabilities

The problem is that this promise does not always translate into simpler operations. Some enterprise versions keep much of the same operational complexity as the community tools, with configuration-heavy workflows and limited abstraction. Others introduce pricing models that grow quickly with traffic and environments, or platforms that are technically powerful but harder to operate on a daily basis.

SKUDONET Enterprise as the natural evolution from Open Source

SKUDONET Enterprise is designed to remove the operational friction that appears when Open Source infrastructure grows.

Configuration, traffic control and visibility are handled from a single plane, instead of being spread across files, nodes and environments. This reduces the effort required to introduce changes and lowers the operational risk.

In practice, this translates into:

  • Centralized management and visibility, without losing control over traffic behavior or routing logic
  • Simpler operations, where updates, high availability and scaling do not rely on complex or fragile maintenance workflows
  • Security that remains manageable, with clear insight into how rules behave and how traffic is affected
  • Operational continuity, even as environments, traffic volume and teams evolve

High availability, updates and maintenance are treated as part of the platform, not as separate projects that require careful coordination. Routine tasks no longer depend on manual processes or deep system-specific knowledge to be executed safely.

Integration remains straightforward. Existing architectures and deployment models stay in place, allowing teams to add Enterprise capabilities without redesigning their stack or introducing heavy control layers.

Pricing stays predictable as environments scale, avoiding the cost escalation and licensing complexity commonly associated with traditional commercial editions.
The result is a platform that preserves the technical foundations teams trust, while making infrastructure easier to operate, easier to maintain and easier to scale.

If you want to evaluate how this approach works in practice, you can try SKUDONET Enterprise with a 30-day demo and validate the fit in your own environment.

04 February, 2026 01:18PM by Nieves Álvarez

hackergotchi for Deepin

Deepin

(中文) 先进生产力:在 deepin 25 上装 OpenClaw 接飞书

Sorry, this entry is only available in 中文.

04 February, 2026 10:05AM by xiaofei

February 03, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

BunsenLabs Carbon Release Candidate 3 iso available

BunsenLabs Carbon Release Candidate 3 iso is available here: https://sourceforge.net/projects/bunsen … hybrid.iso https://sourceforge.net/projects/bunsen … iso.sha256

sha256 sum: 47de769531fc0c99d9e0fa4b095ff280919684e5baae29fe264b9970e962a45f
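Verifying the download is standard `sha256sum -c` usage; a stand-in sketch with demo filenames (swap in the real iso and the published `.sha256` file):

```shell
# Stand-in demonstration of the checksum workflow.
# Replace demo.iso with the downloaded BunsenLabs Carbon iso and
# demo.iso.sha256 with the published checksum file.
printf 'not a real iso\n' > demo.iso
sha256sum demo.iso > demo.iso.sha256
sha256sum -c demo.iso.sha256   # prints "demo.iso: OK" when the hash matches
```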

Unless unexpected bugs come up, this should be the same as the Official Release of BunsenLabs Carbon.

If you do find a new bug related to the Carbon RC3 iso, please post it in the Bug Reports section, adding a tag [Carbon RC3].

03 February, 2026 12:00AM

February 01, 2026

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2026/01

The 1st monthly Sparky project and donation report of 2026: – Linux kernel updated up to 6.18.8, 6.12.68-LTS, 6.6.122-LTS – Added new desktop to Sparky testing (9): Labwc – Sparky 2026.01~dev2 Labwc released – renamed the ‘firefox-sparky’ package to ‘firefox-latest’ Many thanks to all of you for supporting our open-source projects. Your donations help keeping them and us alive.

Source

01 February, 2026 07:26PM by pavroo

January 31, 2026

hackergotchi for Grml developers

Grml developers

Michael Prokop: apt, SHA-1 keys + 2026-02-01

You might have seen Policy will reject signature within a year warnings in apt(-get) update runs like this:

root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Get:5 http://foo.example.org/debian demo/main amd64 Packages [1097 B]
Fetched 5326 B in 0s (43.2 kB/s)
All packages are up to date.
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details

root@424812bd4556:/# apt --audit update
Hit:1 http://foo.example.org/debian demo InRelease
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
All packages are up to date.    
Warning:  http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
Audit:  http://foo.example.org/debian/dists/demo/InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
   Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
              No binding signature at time 2024-06-19T10:33:47Z
     because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
     because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
Audit: The sources.list(5) entry for 'http://foo.example.org/debian' should be upgraded to deb822 .sources
Audit: Missing Signed-By in the sources.list(5) entry for 'http://foo.example.org/debian'
Audit: Consider migrating all sources.list(5) entries to the deb822 .sources format
Audit: The deb822 .sources format supports both embedded as well as external OpenPGP keys
Audit: See apt-secure(8) for best practices in configuring repository signing.
Audit: Some sources can be modernized. Run 'apt modernize-sources' to do so.

If you ignored this for the last year, I would like to tell you that 2026-02-01 is not that far away (hello from the past if you’re reading this because you’re already affected).

Let’s simulate the future:

root@424812bd4556:/# apt --update -y install faketime
[...]
root@424812bd4556:/# export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1 FAKETIME="2026-08-29 23:42:11" 
root@424812bd4556:/# date
Sat Aug 29 23:42:11 UTC 2026

root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease                                 
Err:1 http://foo.example.org/debian demo InRelease
  Sub-process /usr/bin/sqv returned an error code (1), error message is: Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:            No binding signature at time 2024-06-19T10:33:47Z   because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance   because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
Warning: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. OpenPGP signature verification failed: http://foo.example.org/debian demo InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is: Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:            No binding signature at time 2024-06-19T10:33:47Z   because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance   because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
root@424812bd4556:/# echo $?
100

Now, the proper solution would have been to fix the signing key underneath (via e.g. sq cert lint --fix --cert-file $PRIVAT_KEY_FILE > $PRIVAT_KEY_FILE-fixed).

If you don’t have access to the according private key (e.g. when using an upstream repository that has been ignoring this issue), you’re out of luck for a proper fix.

But there’s a workaround for the apt situation (related see apt commit 0989275c2f7afb7a5f7698a096664a1035118ebf):

root@424812bd4556:/# cat /usr/share/apt/default-sequoia.config
# Default APT Sequoia configuration. To overwrite, consider copying this
# to /etc/crypto-policies/back-ends/apt-sequoia.config and modify the
# desired values.
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01

[hash_algorithms]
sha1.second_preimage_resistance = 2026-02-01    # Extend the expiry for legacy repositories
sha224 = 2026-02-01

[packets]
signature.v3 = 2026-02-01   # Extend the expiry

Adjust this according to your needs:

root@424812bd4556:/# mkdir -p /etc/crypto-policies/back-ends/

root@424812bd4556:/# cp /usr/share/apt/default-sequoia.config /etc/crypto-policies/back-ends/apt-sequoia.config

root@424812bd4556:/# $EDITOR /etc/crypto-policies/back-ends/apt-sequoia.config

root@424812bd4556:/# cat /etc/crypto-policies/back-ends/apt-sequoia.config
# APT Sequoia override configuration
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01

[hash_algorithms]
sha1.second_preimage_resistance = 2026-09-01    # Extend the expiry for legacy repositories
sha224 = 2026-09-01

[packets]
signature.v3 = 2026-02-01   # Extend the expiry

Then we’re back into the original situation, being a warning instead of an error:

root@424812bd4556:/# apt update
Hit:1 http://deb.debian.org/debian trixie InRelease
Get:2 http://foo.example.org/debian demo InRelease [4229 B]
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
[..]

Please note that this is a workaround, and not a proper solution.

31 January, 2026 01:57PM

January 30, 2026

hackergotchi for Deepin

Deepin

Urgent Security Update | OpenSSL Multiple Vulnerabilities Fixed, Please Upgrade ASAP!

🔔 Dear deepin Users and Community Members, Recently, OpenSSL has released multiple security vulnerability fix announcements, involving 13 security vulnerabilities, including 2 High/Medium-risk vulnerabilities. To ensure the security of your system, we strongly recommend all users upgrade the relevant packages as soon as possible.   I. Vulnerability Information The CVE identifiers involved in this fix are as follows: CVE-2025-9230, CVE-2025-9231, CVE-2025-9232, CVE-2025-15467, CVE-2025-15468, CVE-2025-66199, CVE-2025-68160, CVE-2025-69418, CVE-2025-69419, CVE-2025-69420, CVE-2025-69421, CVE-2026-22795, CVE-2026-22796   Key High/Medium Risk Vulnerability Fixes CVE-2025-15467 | High CMS AuthEnvelopedData Parsing Stack Buffer Overflow: This vulnerability could lead to Remote Code Execution (RCE) under specific conditions. Immediate updating ...Read more

30 January, 2026 10:05AM by xiaofei

hackergotchi for VyOS

VyOS

VyOS Project January 2026 Update

Hello, Community! The belated development update for December 2025 and January 2026 is finally here.

We are getting closer to the 1.5 release but there's also quite a bit of work towards the future. In particular, there's good progress towards replacing the old configuration command completion mechanism with a VyConf-based equivalent, which will allow us to get rid of legacy command definition files eventually.

More immediate improvements include certificate-based authentication for OpenConnect, new operational commands for VPP, support for configuring watchdog timers, and multiple bug fixes.

30 January, 2026 09:00AM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Tails

Tails

Tails 7.4.1

This release is an emergency release to fix critical security vulnerabilities in OpenSSL, a network encryption library used by Tor.

Changes and updates

Included software

  • Update the OpenSSL library to 3.5.4, which fixes DSA-6113-1, a set of vulnerabilities that could be critical. Using this set of vulnerabilities, a malicious Tor relay might be able to deanonymize a Tails user.

    We are not aware of these vulnerabilities being exploited in practice.

  • Update the Tor client to 0.4.8.22.

  • Update Thunderbird to 140.7.0.

Fixed problems

  • Fix Gmail authentication in Thunderbird. (#21384)

  • Add a spinner when opening the Wi-Fi settings from the Tor Connection assistant. (#18594)

For more details, read our changelog.

Known issues

The homepage of Tor Browser incorrectly says you are still using Tails 7.4, even after you have upgraded to 7.4.1. It also links to the release notes for that older version.

If in doubt, to verify that you are using Tails 7.4.1, choose Apps ▸ Tails ▸ About Tails.

Get Tails 7.4.1

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.4.1.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.4.1 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.4.1 directly:

30 January, 2026 12:00AM

January 29, 2026

hackergotchi for ZEVENET

ZEVENET

Why multi-tenant proxies make security decisions harder for applications

In recent weeks, several incidents surfaced where content providers blocked traffic coming from multi-tenant proxies to stop automated attacks or illegal rebroadcasting. The countermeasure reduced the offensive surface, but also denied access to legitimate users travelling through the same channel. It illustrates a common issue: upstream security — security applied at proxies, CDNs or scrubbing centers before traffic reaches the application — does not always retain the context required to make good decisions.

The relevant point is not the individual incident, but what it exposes: when security runs upstream and multi-tenant, the backend loses semantics, session state and part of the operational timeline. This alters how attacks are detected, how they are mitigated, and how user continuity is preserved.

The issue is not that these proxies “fail”, but that their efficiency relies on sharing channel, capacity and enforcement across thousands of customers. The model optimizes cost and scale, but erodes signals that were historically essential for security and operations: origin, semantics, persistence and temporal correlation. Once those signals disappear, security stops being a purely defensive problem and becomes an operational decision problem.

Shared-proxy architectures and their operational trade-offs

Multi-tenant proxies — Cloudflare being the most visible reference — terminate TLS, filter bots, apply WAF rules, absorb DDoS and optimize latency before forwarding requests to the backend. Operationally, the model offers:

  • shared scale
  • economic amortization
  • simplified management

The problem emerges in the least visible layer: traffic identity. When thousands of customers share the same defensive channel, the IP address no longer represents a user, it represents the proxy. For the backend, origin stops being an identity signal and becomes a collective. Attackers, legitimate users and corporate SSO traffic exit through the same door.

Traditional web security largely assumed origin was enough to make decisions. In a multi-tenant model, that signal degrades and the system no longer separates legitimate from abusive behavior with the same clarity.

At that point the decision collapses to two choices:

  • block the channel → stops the attack but penalizes legitimate users
  • allow the channel → preserves continuity but lets part of the attack through

The difficulty is not having two options, but having to choose with incomplete information. That is where the multi-tenant model shows its real cost: it gains efficiency but loses context.

How upstream filtering fragments application context

Context loss is not just about hiding origin or masking IP. In production it appears across multiple planes, and — importantly — not in the same place nor at the same time. This fragments the operational timeline, weakens signals and complicates defensive decision-making.

TLS plane

When TLS negotiation and establishment happen before reaching the application, the backend stops seeing signals that do not indicate attack but do indicate degradation of legitimate clients, such as:

  • renegotiation attempts
  • handshake failures
  • client-side timeouts
  • cipher downgrades
  • inconsistent SNI

During brownouts or incident response, these signals matter because they describe the real client, not the attacker. In a multi-tenant proxy, that degradation disappears and the application only sees “apparently normal” HTTP. For continuity and SLO compliance, that information is lost in the wrong plane.

WAF plane

When filtering occurs before the application — at a proxy or intermediary — another effect appears: the backend sees the symptom but not the cause.
The real circuit is:

Request → WAF/Proxy → Block → END

but for the backend it becomes simply: less traffic

Without correlation between planes, root-cause analysis becomes unreliable. A drop in requests may look like failure, user abandonment or load pressure when it is in fact defensive blocking.

Session plane

In modern architectures, user state does not live in the connection but in the session: identity, role, flow position and transactional continuity. When session lives in a proxy or intermediary layer, the backend loses persistence and affinity. In applications driven by login, payment or transactional actions, this is critical.

The symptoms do not resemble an attack; they resemble broken UX:

  • unexpected logouts
  • interrupted payments
  • inconsistent login flows
  • failover correct from infrastructure perspective but wrong from user perspective

This is the typical case where the infrastructure “works”, but the user churns because the flow cannot complete.

Observability plane

The quietest plane concerns who sees what and when. If logs, metrics and traces stay at the proxy or upstream service, the downstream side — the one closer to application and backend — becomes partial or blind.

Without temporal continuity across planes, the following increase:

  • time-to-detect
  • time-to-mitigate
  • internal noise
  • post-mortem cost

And, more importantly, real-time defensive decisions degrade — precisely where continuity matters.

From origin-based filtering to behavior-based decisions

In recent years, defensive analysis has shifted toward behavior. Where the client comes from matters less than what the client is trying to do. Regular timings, repeated attempts, invalid sequences, actions that violate flow logic, or discrepancies between what the client requests and what the application expects are more stable signals than an aggregated IP.

In short:

| Question | Traditional signal | Relevant signal | Defensive value |
|---|---|---|---|
| Where does it come from? | IP / ASN / reputation | | Low (ambiguous in multi-tenant) |
| What is it trying to do? | | Behavior / semantics | High (context + intent) |

Interpreting intent requires three planes that upstream proxies lose by design:

  • session (who and where in the flow)
  • semantics (what action is being attempted)
  • timeline (in what order things occur)

Without those planes, defensive decisions simplify. With them, they can be made precise.

The application-side plane where context actually exists

If context disappears upstream, the question is not “remove the proxy”, but locating where the information lives that distinguishes abuse from legitimate use. That information only exists where three things converge:

  • what the user does
  • what the application expects
  • what the system allows

That point is usually the application or the component immediately before it (typically an ADC or integrated WAF), where session, semantics, protocol, results and transactional continuity coexist.

A practical example:

login() → login_failed() → login_failed() → login_failed()

vs:

login() → 2FA() → checkout() → pay()

For the upstream proxy, both are valid HTTP. For the application, they are different intentions: abuse vs legitimate flow.
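The distinction can be sketched with a toy session classifier. This is an illustration only: the event names and the failure threshold are assumptions, not any product's API.

```python
from collections import Counter

# Toy behavior-based check: classify a session by what it tries to do,
# not by the IP it arrives from. Event names and the threshold are
# illustrative assumptions.
def classify(events, max_failed_logins=3):
    counts = Counter(events)
    if counts["login_failed"] >= max_failed_logins:
        return "abusive"        # repeated failures violate the expected flow
    if "pay" in counts or "checkout" in counts:
        return "legitimate"     # the transaction completed as the app expects
    return "unknown"

print(classify(["login", "login_failed", "login_failed", "login_failed"]))  # abusive
print(classify(["login", "2FA", "checkout", "pay"]))                        # legitimate
```

Both sequences look like valid HTTP to an upstream proxy; only a component that sees the session timeline can tell them apart.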

What matters here is not “blocking more”, but blocking with context — which in operations becomes the difference between:

  • blocking the channel
  • blocking the behavior

and, in service terms, between losing legitimate users or preserving continuity.

Where SKUDONET fits

SKUDONET operates in that plane closer to the application, without the constraints of the multi-tenant model. The approach is mono-tenant and unified: TLS, session, WAF, load-balancing and observability coexist in the same plane without fragmenting across layers or externalizing identity and semantics.

This has three operational consequences:

1. Origin retains meaning

No aggregation or masking. IP becomes useful again when combined with behavior.

2. Transactional flows maintain continuity

Login, payment, checkout, reservation or any stateful action survives even during active/passive failover.

3. Timeline and semantics correlate

Errors, attempts and results occur in the same place, enabling precise decisions instead of global blocking.

Schematically:

| Plane | Upstream multi-tenant | SKUDONET |
|---|---|---|
| Identity | Aggregated | Individual |
| Session | External | Local |
| Semantics | Partial | Complete |
| Observability | Fragmented | Correlated |
| Defense | Binary | Contextual |
| Continuity | Fragile | Transactional |

From this plane, security stops being “block proxy yes/no” and focuses on blocking abuse while preserving legitimate users.

Conclusion

Multi-tenant proxies solve scale, cost and distribution. But continuity, semantics and intent still live near the application — because it is the only plane where full context exists.

If continuity and application-level context matter to your stack, you can evaluate SKUDONET Enterprise Edition with a 30-day trial.

29 January, 2026 08:23AM by Nieves Álvarez

January 27, 2026

hackergotchi for Qubes

Qubes

XSAs released on 2026-01-27

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is not affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • (none)

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • XSA-477
    • This XSA affects only HVMs with shadow paging and tracing enabled. In Qubes OS, shadow paging and tracing are disabled at build time.
  • XSA-478
    • This XSA affects only XAPI, which is an alternative toolstack. Qubes OS uses libxl instead of XAPI.
  • XSA-479
    • This XSA affects only in-VM isolation, which Qubes OS does not rely on for security. We will still provide the fix for this issue at a later date, but it will not be accompanied by a Qubes security bulletin (QSB).

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

27 January, 2026 12:00AM

January 26, 2026

hackergotchi for Maemo developers

Maemo developers

Igalia Multimedia contributions in 2025

Now that 2025 is over, it’s time to look back and feel proud of the path we’ve walked. Last year has been really exciting in terms of contributions to GStreamer and WebKit for the Igalia Multimedia team.

With more than 459 contributions along the year, we’ve been one of the top contributors to the GStreamer project, in areas like Vulkan Video, GstValidate, VA, GStreamer Editing Services, WebRTC or H.266 support.

Figure: Igalia’s contributions to the GStreamer project, by area: other (30%), vulkan (24%), validate (7%), va (6%), ges (4%), webrtc (3%), h266parse (3%), python (3%), dots-viewer (3%), tests (2%), docs (2%), devtools (2%), webrtcbin (1%), tracers (1%), qtdemux (1%), gst (1%), ci (1%), y4menc (1%), videorate (1%), gl (1%), alsa (1%).

In Vulkan Video we’ve worked on the VP9 video decoder, and cooperated with other contributors to push the AV1 decoder as well. There’s now an H.264 base class for video encoding that is designed to support general hardware-accelerated processing.

GStreamer Editing Services, the framework for building video editing applications, has gained time remapping support, which allows including fast/slow motion effects in videos. Video transformations (scaling, cropping, rounded corners, etc.) are now hardware-accelerated thanks to the addition of new Skia-based GStreamer elements and integration with OpenGL. Buffer pool tuning and pipeline improvements have helped optimize memory usage and performance, enabling the editing of 4K video at 60 frames per second. Much of this work to improve and ensure quality in GStreamer Editing Services has also brought improvements to the GstValidate testing framework, which will be useful for other parts of GStreamer.

Regarding H.266 (VVC), full playback support (with decoders such as vvdec and avdec_h266, demuxers and muxers for Matroska, MP4 and TS, and parsers for the vvc1 and vvi1 formats) is now available in GStreamer 1.26 thanks to Igalia’s work. This allows user applications such as the WebKitGTK web browser to leverage the hardware accelerated decoding provided by VAAPI to play H.266 video using GStreamer.

Igalia has also been one of the top contributors to GStreamer Rust, with 43 contributions. Most of the commits there have been related to Vulkan Video.

[Pie chart: Igalia's contributions to different areas of the GStreamer Rust project: vulkan (28%), other (26%), gstreamer (12%), ci (12%), tracer (7%), validate (5%), ges (7%), examples (5%)]

In addition to GStreamer, the team also has a strong presence in WebKit, where we leverage our GStreamer knowledge to implement many features of the web engine related to multimedia. Of the 1739 contributions Igalia made to the WebKit project last year, 323 came from the Multimedia team. Nearly one third of those were related to generic multimedia playback, and the rest covered areas such as WebRTC, MediaStream, MSE, WebAudio, a new Quirks system to provide adaptations for specific hardware multimedia platforms at runtime, WebCodecs, and MediaRecorder.

[Pie chart: Igalia Multimedia Team's contributions to different areas of the WebKit project: Generic GStreamer work (33%), WebRTC (20%), Regression bugfixing (9%), Other (7%), MSE (6%), BuildStream SDK (4%), MediaStream (3%), WPE platform (3%), WebAudio (3%), WebKitGTK platform (2%), Quirks (2%), MediaRecorder (2%), EME (2%), Glib (1%), WTF (1%), WebCodecs (1%), GPUProcess (1%), Streams (1%)]

We’re happy about what we’ve achieved along the year and look forward to maintaining this success and bringing even more exciting features and contributions in 2026.


26 January, 2026 09:34AM by Enrique Ocaña González (eocanha@igalia.com)

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

BunsenLabs Carbon Release Notes

What's New in BunsenLabs Carbon?
================================

The BunsenLabs Session is now able to launch Wayland sessions, if the necessary apps and configurations are provided. In the near future a "plugin" metapackage will be available to add a base Wayland session to a BL Carbon system.

Several core apps have been changed to ones that support Wayland as well as X11,
or make theming simpler:
  nitrogen > xwwall + feh
  tint2 > xfce4-panel
  lxappearance > nwg-look
  lxterminal > xfce4-terminal
  arandr > lxrandr
  policykit-1-gnome (obsolete) > mate-polkit

These packages have been dropped from the default install:
  xserver-xorg-video-intel (only needed for pre-2007 Intel graphics)
  qt5-style-plugins

picom configs have been substantially updated for the current picom, which now needs 3D acceleration and OpenGL.
(Actual appearance settings have not been changed much.)

A wrapper script has been added for pkexec under Wayland, and sudoedit is now used to edit files as root.
See:
https://forums.bunsenlabs.org/viewtopic … 01#p143401
https://forums.bunsenlabs.org/viewtopic … 42#p144442

A "bl-menu" command has been added so a menu can be started from the same launcher regardless of running on X11 or Wayland.

blob has seen a lot of work, eg:
- added support for saving and restoring xfce4-panel settings via xfconf
- added Carbon-Sage and Carbon-Bark presets
- older presets still use tint2 (it looks nice): users will be prompted to install it if necessary
- picom files in older presets have been updated so current picom does not crash

openbox and labwc config files have been moved to ~/.config/bunsen/openbox and ~/.config/bunsen/labwc
This means the location of the default openbox rc.xml has changed. Users' original ~/.config/openbox/bl-rc.xml will still exist, so they can either open it and ~/.config/bunsen/openbox/rc.xml side by side and copy across any changes they want to keep, or use a GUI diff app (e.g. meld) to compare them.
(Later, they can remove bl-rc.xml and bl-menu.xml.)

xfce4-panel plugin icons are resized to match the panel with entries in ~/.config/gtk-3.0/gtk.css :

/* some buttons are too big */

#pulseaudio-button * { -gtk-icon-transform: scale(0.6); }

#xfce4-power-manager-plugin * { -gtk-icon-transform: scale(0.4); }

#battery-14 * { -gtk-icon-transform: scale(0.6); }
/* adjust the "#14" to match the widget ID of your battery plugin */

If the audio, power-manager-plugin or battery icons look the wrong size,
adjust the scale number to suit your desktop. (The battery icon is hidden by default.)

Two menu items in ~/.config/jgmenu/prepend.csv are commented out:
- Dropbox (bl-dropbox-pipemenu), which helps users to install and use Dropbox
- Choose Language (bl-setlocale), which lets users choose a locale if their login greeter does not offer that option

bl-exit now uses xfce4-screensaver for locking (it works on Wayland too).

BUNSEN_SESSION_TYPE environment variable is set to x11 or wayland and can be used by scripts etc.

XDG_CURRENT_DESKTOP environment variable is set to 'BunsenLabs:XFCE'
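As a sketch of how a user script might use that variable (the variable name and its x11/wayland values come from the notes above; the commands chosen here are hypothetical placeholders, not part of BunsenLabs):

```shell
#!/bin/sh
# Branch a user script on the BunsenLabs session type.
# BUNSEN_SESSION_TYPE is set to "x11" or "wayland" by the BunsenLabs Session.
# The tools selected below are hypothetical examples.
case "${BUNSEN_SESSION_TYPE:-x11}" in
    wayland) screenshot_cmd="grim" ;;   # example Wayland screenshot tool
    x11)     screenshot_cmd="scrot" ;;  # example X11 screenshot tool
    *)       screenshot_cmd="" ;;
esac
echo "would use: $screenshot_cmd"
```

The same pattern works for wallpaper setters, compositors, or anything else that differs between the two session types.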

bunsen-meta-bluetooth now depends on libspa-0.2-bluetooth for pipewire support.

Apt signing keys are now installed to /usr/share/keyrings, but until BL Nitrogen a symlink is kept from the old location in /etc/apt/trusted.gpg.d.

live-build:
- use zstd compression
- ensure grub first boot menu entry shows "BunsenLabs"
- add Signed-By field to sources
- make sure en_US.UTF-8 locale is installed along with user's chosen locale

bl-welcome:
- rewrite welcome screen to take slightly less space (thanks to @sleekmason)
- offer to convert sources to deb822 format
- drop PAE test (no more 32 bit)

Set GTK4 apps to use dark theme by default in gsettings and add some limited support for theme setting:
https://forums.bunsenlabs.org/viewtopic … 80#p147480
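For reference, the gsettings key usually involved in the GTK4/libadwaita dark preference is the GNOME color-scheme setting; a sketch of setting and checking it (assuming the standard org.gnome.desktop.interface schema is present on the system):

```shell
# Assumption: the standard GNOME interface schema is installed.
# Ask GTK4/libadwaita apps to prefer the dark style:
gsettings set org.gnome.desktop.interface color-scheme 'prefer-dark'

# Verify the current value:
gsettings get org.gnome.desktop.interface color-scheme
```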

CREDITS
=======

As always, many people have contributed, with special credit to:
@hhh
@micko01
@sleekmason
@greenjeans

And thanks to Pawel Czerwinski for the beautiful wallpaper that @hhh has adapted for Carbon!

POSSIBLE ISSUES
===============

1) 32-bit isos and packages are not available for BL Carbon because Debian has dropped 32-bit support.
See: https://forums.bunsenlabs.org/viewtopic … 94#p148894
Users of BL Boron with 32-bit systems should not attempt to upgrade to Carbon. Boron will be supported for as long as Debian Bookworm is, i.e. until June 2028.

2) NOTE for Virtual Machine users:
The compositor in Debian Trixie and BunsenLabs Carbon, picom 12.5-1, requires OpenGL and 3D acceleration to work properly.

If your BunsenLabs Carbon desktop is unusable when running in a virtual machine,
you can:

a) Disable compositing from the menu: User Settings > Compositor > Disable Compositing
You will lose rounded corners, shadows etc. but the desktop will be usable.
To make it permanent: menu > User Settings > BunsenLabs Session > Edit autostart
and comment out this line:
    bl-compositor --start

or

b) If your virtual machine manager supports it, enable OpenGL and 3D acceleration.
If you are using virt-manager:
In the settings menu, open up the Display Spice section:
Select Spice server for Type:, and None for Listen type.
Check the OpenGL checkbox. Hit Apply.
In the Video Virtio section:
Set Virtio for Model, and check the 3D acceleration checkbox. Hit Apply.

See also:
https://forums.bunsenlabs.org/viewtopic … 23#p141523
https://ryan.himmelwright.net/post/virtio-3d-vms/

Last edited by johnraff (2026-02-27 02:33:18)

26 January, 2026 12:00AM

January 23, 2026

hackergotchi for ZEVENET

ZEVENET

High availability is not redundancy — it’s operational decision-making

For years, high availability (HA) was treated as a redundancy problem: duplicate servers, replicate databases, maintain a secondary site and ensure that if something failed, there was a plan B waiting. That model worked when applications were monolithic, topologies were simple, and traffic variability was low. Today the environment looks different: applications are split into services, traffic is irregular, encryption is the norm, and infrastructure is distributed. Availability is no longer decided at the machine level, but at the operational plane.

The first relevant distinction appears when we separate binary failures from degradations. Most HA architectures are designed to detect obvious “crashes,” yet in production the meaningful incidents are rarely crashes—they are partial degradations (brownouts): the database responds, but slowly; a backend accepts connections but does not process; the Web Application Firewall (WAF) blocks legitimate traffic; intermittent timeouts create queues. For a basic health-check everything is “up”; for the user, it isn’t.

From redundancy to operational continuity

Operational degradations in production are not homogeneous. In general, we can distinguish at least six categories:

  • Failure (binary crash)
  • Partial failure (works, but incompletely)
  • Brownout (responds, but not on time)
  • Silent drop (no error, but traffic is lost)
  • Control-plane stall (decisions arrive too late)
  • Data-plane stall (traffic is blocked in-path)

The component that arbitrates this ambiguity is the load balancer. Not because it is the most critical part of the system, but because it is the only one observing real-time traffic and responsible for deciding when a service is “healthy,” when it is degraded, and when failover should be triggered. That decision becomes complex when factors like TLS encryption, session handling, inspection, security controls or latency decoupled from load interact. The load balancer does not merely route traffic—it determines continuity.

In real incidents, operational ambiguity surfaces like this:

Phenomenon             Failure type  Detected by health-check  User impact  LB decision         Real complexity
Backend down           Binary        Yes                       High         Immediate failover  Low
Backend slow           Brownout      Partial                   High         Late / None         High
Intermittent timeouts  Brownout      Not always                Medium/High  Ambiguous           High
WAF blocking           Security      No                        High         None                High
Slow TLS handshake     TLS layer     Partial                   Medium       N/A                 Medium
Session saturation     Stateful      No                        High         Unknown             High
Session transfer       Operational   No                        Medium       Late                Medium
DB degradation         Backend       Partial                   High         Not correlated      High
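The gap between a binary health check and brownout detection can be sketched in a few lines. This is an illustrative model only, not the behavior of any particular load balancer; the function names, SLO, and thresholds are made up:

```python
def binary_check(responded: bool) -> bool:
    """Classic health check: 'up' means the backend answered at all."""
    return responded

def brownout_check(latencies_ms, slo_ms=250, tolerated_fraction=0.05):
    """Latency-aware check: flag degradation when too many recent
    requests exceed the latency SLO, even though all of them 'succeeded'."""
    slow = sum(1 for latency in latencies_ms if latency > slo_ms)
    return slow / len(latencies_ms) <= tolerated_fraction

# A backend that answers every request, but 10% of them far too slowly,
# passes the binary check while failing the brownout check.
samples = [120] * 90 + [900] * 10
healthy_binary = binary_check(True)        # True: it responds
healthy_latency = brownout_check(samples)  # False: 10% of requests blow the SLO
```

The point is that the two checks disagree precisely in the "backend slow" and "intermittent timeouts" rows of the table above: only a signal derived from real traffic, not from liveness, can see the degradation.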

There is also a persistent misconception between availability and scaling. Scaling answers the question “how much load can I absorb?” High availability answers a completely different one: “what happens when something fails?” An application can scale flawlessly and still suffer a major incident because failover triggered too late, sessions failed to survive backend changes, or the control plane took too long to propagate state.

Encrypted traffic inspection adds another layer. In many environments, TLS inspection and the Web Application Firewall sit on a different plane than the load balancer. In theory this is modular; in practice it introduces coordination. If the firewall blocks part of legitimate traffic, the load balancer sees fewer errors than the system actually produces. If the backend degrades but the firewall masks the problem upstream, there is no clear signal. Availability becomes a question of coupling between planes.

The final problem is often epistemological: who owns the truth of the incident? During an outage, observability depends on who retains context. If the balancing plane, the inspection plane, the security plane and the monitoring plane are separate tools, the post-mortem becomes archaeology: fragmented logs, incomplete metrics, sampling, misaligned timestamps, and three contradictory narratives of the same event.

So what does high availability actually mean in 2026?

For operational teams, the definition that best fits reality is this: High availability is the ability to maintain continuity under non-binary failures.
This implies:

  1. understanding degradation vs true unavailability
  2. basing decisions on traffic and context, not just checks
  3. coordinating security, inspection and session
  4. having observability at the same plane that decides failover
  5. treating availability as an operational problem, not as hardware redundancy

Where does SKUDONET fit in this model?

SKUDONET Enterprise Edition is built around that premise: availability does not depend solely on having an extra node, but on coordinating, in a single operational plane, load balancing at layers 4 and 7, TLS termination and inspection, security policies, certificate management, and traffic observability. The goal is not to abstract complexity, but to place decision-making and understanding in the same context.

In environments where failover is exceptional, this coupling may go unnoticed. But in environments where degradation is intermittent and traffic is non-linear, high availability stops being a passive mechanism and becomes a process. What SKUDONET provides is not a guarantee that nothing will fail—such a guarantee does not exist—but an architecture where continuity depends less on assumptions and more on signals.

A 30-day evaluation of SKUDONET Enterprise Edition is available for teams who want to validate behavior under real workloads.

23 January, 2026 10:39AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

Breaking: XDG Adds Native Support for Linyaps

In the world of Linux desktop computing, there exists a foundational "common language" that underpins all interoperability—the XDG specifications, developed and maintained by the freedesktop.org organization. XDG is the critical standard for solving Linux's ecosystem fragmentation and establishing unified resource access protocols. Whether you are an application developer or a distribution maintainer, ensuring your product runs well on a modern Linux desktop necessitates adherence to the XDG standard. It is the key cornerstone enabling the Linux desktop to evolve from "working in silos" to "unified collaboration." From desktop icons and application menus to system notifications and file dialogs, XDG specifications permeate every facet ...Read more

23 January, 2026 09:54AM by xiaofei


January 21, 2026

hackergotchi for Grml developers

Grml developers

Evgeni Golov: Validating cloud-init configs without being root

Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.

And today we're gonna generate YAML from ERB, what could possibly go wrong?!

Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.

The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.

Enter cloud-init schema, or so I thought. Turns out running cloud-init schema is rather broken without root privileges, as it tries to load a ton of information from the running system. This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself. I've not found a way to disable that behavior.

Luckily, I know Python.

Enter evgeni-knows-better-and-can-write-python:

#!/usr/bin/env python3

import sys
from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)

The canonical version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.

The hardest part was to understand the validate_cloudconfig_file API, as it will sometimes raise a SchemaValidationError, sometimes a RuntimeError, and sometimes just return False. No idea why. But the above just turns it all into a couple of printed lines and a non-zero exit code, unless of course there are no problems, in which case you get peaceful silence.
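That kind of mixed failure signaling (raising vs. returning a falsy value) can be collapsed with a small generic wrapper. This is a standalone sketch with made-up names, not part of cloud-init:

```python
def normalize(fn, *args, errors=(Exception,)):
    """Call fn and collapse its mixed failure signals
    (raising an exception, or returning a falsy value)
    into a uniform (ok, message) pair."""
    try:
        result = fn(*args)
    except errors as e:
        return False, str(e)
    if not result:
        return False, "validation returned a falsy result"
    return True, None
```

With that, the caller only ever inspects one boolean and one message, regardless of which failure path the wrapped function took.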

21 January, 2026 07:42PM

hackergotchi for Deepin

Deepin

January 20, 2026

hackergotchi for GreenboneOS

GreenboneOS

CVE-2025-64155: In the Wild Exploitation of FortiSIEM for Unauthenticated Root-Level RCE

On January 13th, 2026, Fortinet publicly disclosed and patched CVE-2025-64155 (CVSS 9.8) affecting FortiSIEM along with five additional vulnerabilities across its product line [1][2][3][4][5]. In particular, CVE-2025-64155 represents high-risk exposure; immediately after its release, active exploitation was reported. The flaw was responsibly disclosed to Fortinet almost six months ago (August 2025), by Horizon3.ai. Greenbone includes […]

20 January, 2026 07:53AM by Joseph Lee

January 19, 2026

hackergotchi for Deepin

Deepin

deepin 25.0.10 Release Note

In order to further optimize the deepin 25 system update experience and enhance stability, the deepin 25.0.10 image is now officially released. This update focuses on system installation experience, file management, system interaction, and stability, optimizing multiple high-frequency usage scenarios, fixing a large number of known issues, and improving system smoothness and reliability.   Key Updates in This Release System Installer: Optimized the prompt text for data formatting during full-disk installation, now supporting the option to retain user data and reuse the original account data, configurations, and files. Comprehensive Upgrade of File Manager: Added practical features such as automatic scrolling during file ...Read more

19 January, 2026 05:41AM by xiaofei

January 15, 2026

hackergotchi for Tails

Tails

Tails 7.4

New feature

Persistent language and keyboard layout

You can now save your language and keyboard layout from the Welcome Screen to the USB stick. These settings will be applied automatically when restarting Tails.

If you turn on this option, your language and keyboard layout are saved unencrypted on the USB stick to help you type the passphrase of your Persistent Storage more easily.

Changes and updates

  • Update Tor Browser to 15.0.4.

  • Update Thunderbird to 140.6.0.

  • Update the Linux kernel to 6.12.63.

  • Drop support for BitTorrent download.

    With the ongoing transition from BitTorrent v1 to v2, the BitTorrent v1 files that we provided until now can become a security concern. We don't think that updating to BitTorrent v2 is worth the extra migration and maintenance cost for our team.

    Direct download from one of our mirrors is usually faster.

Fixed problems

  • Fix opening .gpg encrypted files in Kleopatra when double-clicking or selecting Open with Kleopatra from the shortcut menu. (#21281)

  • Fix the desktop crashing when unlocking VeraCrypt volumes with a wrong password. (#21286)

  • Use 24-hour time format consistently in the top navigation bar and the lock screen. (#21310)

For more details, read our changelog.

Get Tails 7.4

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.4.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.4 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.4 directly:

15 January, 2026 12:00AM

January 14, 2026

hackergotchi for ZEVENET

ZEVENET

Cloud security works, but not as a unified system

Talking about the cloud today no longer means talking about a technological trend, but about a central piece of the business. More and more companies are moving their infrastructure to cloud providers under the promise of less hardware, less maintenance, fewer licenses and less time spent on activities that do not generate value.

Much of that promise has been fulfilled. Cloud has democratized capabilities that only large organizations could access a few years ago. Launching a service, increasing capacity or deploying a new region is now easier, faster and more accessible.

However, as often happens with technology, the story changes when we zoom into operations. Cloud simplifies infrastructure, but it does not always simplify how that infrastructure is operated. And that nuance affects not only technical teams, but also the business itself.

Cloud providers don’t sell “solutions” — they sell components

The first point of friction does not appear in compute or storage, but in the services that accompany the infrastructure. This includes security, load balancing, TLS certificates, application firewalls, monitoring and observability.

In the cloud provider’s catalog, the technology is there, but it is sold as separate components. Security on one side, certificates on another, observability on another, and advanced capabilities billed as add-ons. The customer does not go without service, but is left with a recurring question: what exactly must be purchased to remain protected and operate reliably?

A less visible aspect also emerges: security is billed per event, per inspection or per volume of traffic. What used to be a hardware expense becomes a bill based on requests, analysis and certificates. Cloud solved hardware, but externalized the operational complexity of security.

Metrics and logs exist, but they are often fragmented, sampled and weakly correlated. Understanding what happened during an incident may require navigating multiple services and data models. Cloud promises security, but it rarely promises explanations.

And at its core this is not a technical problem, but a model problem. Cloud security is commercialized as a product but consumed as a service. And when there is a mismatch between how something is purchased and how it is used, friction eventually appears.

SkudoCloud as an example of the managed approach

This is the context in which SkudoCloud emerges — not to replace the cloud provider or compete as infrastructure, but to resolve the operational coherence between load balancing, security and visibility.

SkudoCloud is a SaaS platform that enables companies to deploy advanced load balancing and application protection without assembling separate modules, tools or services. From a single interface, organizations can:

  • manage SSL/TLS certificates
  • inspect encrypted traffic
  • apply WAF rules
  • distribute load across backends
  • and monitor application behavior

The most evident difference appears in security. In the modular cloud model, the customer must decide what to purchase, which rules to enable, how to correlate logs and how to keep everything updated. In a managed model like SkudoCloud, certificates, WAF, TLS inspection and load balancing behave as one coherent system.

This has direct consequences for the business:

  • it reduces operational uncertainty
  • it improves visibility during incidents
  • and it avoids billing models tied to traffic volume or number of inspections

Instead of acquiring security, companies acquire operability. Instead of assembling components, they obtain an outcome. That is the difference of a managed approach.

Conclusion

Cloud adoption is already a given. The real question now is how to operate it sustainably. Fragmentation was a natural side effect of the migration phase. Unification will likely be the central theme of the operational phase.

Cloud simplified servers. Now it is time to simplify operations.

14 January, 2026 10:25AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

January 13, 2026

hackergotchi for Purism PureOS

Purism PureOS

PureOS Crimson Development Report: December 2025

"Fit and finish" appears in many industries. For much of the software industry, it refers to features that complete a fit for a target audience, ensuring that audience can use the product for their needs. At a frame shop, it means literally fitting the mounted artwork into a frame, then finishing the back of the frame.

At Purism, fit takes on another meaning - making apps fit on screens the size of the Librem 5.

The post PureOS Crimson Development Report: December 2025 appeared first on Purism.

13 January, 2026 10:11PM by Purism

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Forum downtime.

Apologies to users who were hit by forum downtime from ~9:00 to 16:30 Japan time. An upstream server crash combined with an unplanned package upgrade meant some configurations had to be edited. I think all is well now.

13 January, 2026 12:00AM

[DONE] BunsenLabs Carbon Release Candidate 2 iso available for testing

RC3 is now available , so please test that one - links here: https://forums.bunsenlabs.org/viewtopic.php?id=9682

---
As usual it was a longer road than planned, with some unexpected tasks, but there is now a Carbon RC2 candidate iso file available for download here:
https://sourceforge.net/projects/bunsen … hybrid.iso
sha256 checksum: d0beb580ba500e2b562e1f39aa6ec02d03597d8f95d73fd86c1755e3fee1ef7d

If you have a free machine or VM to install it on, please give it some testing!

Bugs were originally collected here: https://forums.bunsenlabs.org/viewtopic.php?id=9656
That thread is now closed because having multiple bug reports mixed up together was too confusing. Please post any new bugs related to the Carbon RC2 iso in individual threads in the Bug Reports section, adding a [Carbon RC2] tag.

When it seems as if there aren't any bugs left to squash, we can do an Official Release. cool

Last edited by johnraff (2026-02-03 07:37:59)

13 January, 2026 12:00AM

January 11, 2026

hackergotchi for SparkyLinux

SparkyLinux

Labwc

There is a new desktop available for Sparkers: Labwc, as well as a Sparky 2026.01~dev Labwc ISO image. What is Labwc? Installation on Sparky testing (9): (package installation only; requires your own setup): or (with Sparky settings): via APTus (>= 20260108)-> Desktops-> Labwc or (with Sparky settings): via the Sparky testing (9) MinimalGUI/ISO image. Then reboot for the changes to take effect…

Source

11 January, 2026 11:42AM by pavroo

January 10, 2026

hackergotchi for Grml developers

Grml developers

Michael Prokop: Bookdump 2025

Photo of the books presented here

My reading year 2025, averaging a little more than one book per week, was comparable to 2024. My best-of list of the books I finished in 2025 (the ones I found especially worth reading or want to recommend; the order follows the photo and implies no ranking):

  • Russische Spezialitäten, Dmitrij Kapitelman. What a firework of a book: linguistically powerful, sad, amusing.
  • Die Jungfrau, Monika Helfer. After Helfer's "Die Bagage", "Löwenherz" and "Vati", this book was of course required reading for me.
  • Das Buch zum Film, Clemens J. Setz. Wonderful everyday observations and bon mots - my only criticism: at 192 pages, it is too short.
  • Wackelkontakt, Wolf Haas. Yes, yes, a well-known bestseller, etc. But he is and remains one of my favorite authors. I attended his reading in Graz and afterwards read the book a second time, without regretting a single second. A language artist, and that's putting it mildly!
  • Fleisch ist mein Gemüse, Heinz Strunk. I love background stories, especially about music and the life of musicians, and that is the case here with its excursion into the dance-music business. With a few exceptions, it reads smoothly.
  • Wut und Wertung: Warum wir über Geschmack streiten, Johannes Franzen. Why do conflicts about taste, art and canon escalate? Why is arguing about taste an important cultural technique? Franzen works this out using real controversies and scandals; instructive and stimulating.
  • Klapper, Kurt Prödel. Fans of Clemens J. Setz will of course know Prödel, and since I also like coming-of-age novels, this was a double hit. I'm already looking forward to his new book "Salto"!
  • Hier treibt mein Kartoffelherz, Anna Weidenholzer. I honestly can't say anything more about this book, but I really enjoyed reading it.
  • Die Infantin trägt den Scheitel links, Helena Adler. The book had an interesting pull on me; I just wanted to keep reading. The playful language and wordplay made it even finer.
  • Das schöne Leben, Christiane Rösinger. Rösinger's books were recommended to me by Kathrin Passig (a direct hit, thanks!). I also got hold of all of Rösinger's other books ("Berlin – Baku. Meine Reise zum Eurovision Song Contest", "Zukunft machen wir später: Meine Deutschstunden mit Geflüchteten", "Liebe wird oft überbewertet") and enjoyed reading them very much.

10 January, 2026 05:29PM

January 09, 2026

hackergotchi for Proxmox VE

Proxmox VE

New Archive CDN for End-of-Life (EOL) Releases

Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older.

The archive is reachable via the following URLs:

To use the archive for an EOL release, you will need to change the domain in the apt repository configuration...
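
As a sketch only - the placeholder domain below is not the real archive host, and the file path is illustrative; take the actual URLs from the announcement - the switch is a one-line domain swap in the apt source:

```shell
# Point an EOL release's apt source at the archive CDN by swapping the
# repository domain. "archive.example.org" is a placeholder, NOT the
# real archive host; substitute the URL from the announcement.
conf=/tmp/pve-demo.list
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > "$conf"
sed -i 's|download.proxmox.com|archive.example.org|' "$conf"
cat "$conf"
```

After the edit, `apt update` would fetch the frozen Buster-era package indexes from the archive instead of the (now removed) live repository.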

Read more

09 January, 2026 05:43PM by t.lamprecht

January 08, 2026

hackergotchi for GreenboneOS

GreenboneOS

December 2025 Threat Report: Emergency End-of-Year Patches and New Exploit Campaigns

In 2025, Greenbone increased the total number of vulnerability tests in the OPENVAS ENTERPRISE FEED to over 227,000, adding almost 40,000 vulnerability checks. Since the first CVE was published in 1999, over 300,000 software vulnerabilities have been added to MITRE’s CVE repository. CVE disclosures continued to rocket upward, increasing roughly 21% compared to 2024. CISA […]

08 January, 2026 01:05PM by Joseph Lee

January 07, 2026

hackergotchi for Deepin

Deepin

January 06, 2026

hackergotchi for VyOS

VyOS

VyOS 1.4.4 LTS Achieves Nutanix Ready Validation for AOS 7.3

We’re excited to announce that VyOS 1.4.4 LTS has officially achieved Nutanix Ready validation for Nutanix Acropolis Operating System (AOS) 7.3 and AHV Hypervisor 10.3.

This milestone strengthens our collaboration with Nutanix and ensures full interoperability for customers deploying VyOS Universal Router within the Nutanix Cloud Infrastructure solution.

06 January, 2026 02:30PM by Santiago Blanquet (yago.blanquet@vyos.io)

hackergotchi for Deepin

Deepin

January 03, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

New utility: xml2xfconf

In the process of getting Blob - and Carbon - ready for release, a bug with blob's handling of xfconf settings came up: https://forums.bunsenlabs.org/viewtopic … 79#p148079

It turned out that while xfconf-query doesn't output the type of settings entries, it requires the type when adding a new entry. So running 'xfconf-query -c "<channel>" -lv' is not enough to back up an xfce app which stores its settings in the xfconf database - which most of them do these days. We need to store the type too. Luckily that data is stored in the app's xml file in ~/.config/xfce4/xfconf/xfce-perchannel-xml/ so to back it up, all we need to do is save that file.

In principle it might be possible to restore the settings by copying the xml file back into place, overwriting whatever's there, but the apps don't always respond right away, often needing a logout/login. There's a better way - if you know the missing type then you can run xfconf-query commands to restore the settings.

So, here is a script called xml2xfconf. Passed an xfconf xml file - e.g. a backed-up copy of one of those in xfce-perchannel-xml/ - it will print out a list of xfconf-query commands to apply those settings to the xfconf database, and they'll take effect immediately. cool

Example usage:

restore=$(mktemp)
xml2xfconf -x /path/to/xfce4-terminal.xml -c xfce4-terminal > "$restore"
bash "$restore"

Here's what got written into $restore:

xfconf-query -c xfce4-terminal -p /font-name -n -t string -s Monospace\ 10
xfconf-query -c xfce4-terminal -p /color-use-theme -n -t bool -s false
xfconf-query -c xfce4-terminal -p /font-allow-bold -n -t bool -s true
xfconf-query -c xfce4-terminal -p /title-mode -n -t string -s TERMINAL_TITLE_REPLACE
xfconf-query -c xfce4-terminal -p /scrolling-lines -n -t uint -s 50000
xfconf-query -c xfce4-terminal -p /font-use-system -n -t bool -s false
xfconf-query -c xfce4-terminal -p /background-mode -n -t string -s TERMINAL_BACKGROUND_TRANSPARENT
xfconf-query -c xfce4-terminal -p /background-darkness -n -t double -s 0.94999999999999996
xfconf-query -c xfce4-terminal -p /color-bold-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold-is-bright -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-background-vary -n -t bool -s false
xfconf-query -c xfce4-terminal -p /color-foreground -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-background -n -t string -s \#2c2c2c
xfconf-query -c xfce4-terminal -p /color-cursor-foreground -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-cursor -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-cursor-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-selection -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-background -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-palette -n -t string -s \#3f3f3f\;#705050\;#60b48a\;#dfaf8f\;#9ab8d7\;#dc8cc3\;#8cd0d3\;#dcdcdc\;#709080\;#dca3a3\;#72d5a3\;#f0dfaf\;#94bff3\;#ec93d3\;#93e0e3\;#ffffff
xfconf-query -c xfce4-terminal -p /tab-activity-color -n -t string -s \#aa0000
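
The transformation itself is mechanical enough to sketch. The snippet below is a hypothetical, heavily simplified re-implementation of the idea - flat properties only, no nested elements or arrays, with a hard-coded channel - not the shipped xml2xfconf:

```shell
# Write a tiny perchannel-style XML file, then emit one xfconf-query
# command per <property> element, pulling name ($2), type ($4) and
# value ($6) out of the quoted attributes.
cat > /tmp/demo-channel.xml <<'EOF'
<channel name="xfce4-terminal" version="1.0">
  <property name="font-name" type="string" value="Monospace 10"/>
  <property name="scrolling-lines" type="uint" value="50000"/>
</channel>
EOF

awk -F'"' '/<property / {
  printf "xfconf-query -c xfce4-terminal -p /%s -n -t %s -s \"%s\"\n", $2, $4, $6
}' /tmp/demo-channel.xml
```

Running it prints one xfconf-query command per property, the same shape as the $restore output above.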

xml2xfconf has been uploaded in the latest version of bunsen-utilities, so now I'm going to rewrite the bits of BLOB which use xfconf (only a couple of apps actually) to use xml2xfconf and with luck the bug which @Dave75 found will go away.

And then the Carbon release can get rolling again.

It wasn't a welcome interruption, but this new utility might be useful outside Blob for people who want to back up and restore xfce app settings. smile

03 January, 2026 12:00AM

January 02, 2026

hackergotchi for ZEVENET

ZEVENET

How to Evaluate a WAF in 2026 for SaaS Environments

Web applications and APIs are now the operational core of most digital services. They process transactions, expose business logic, manage identities, and connect distributed systems that evolve continuously. In parallel, the volume and sophistication of attacks have increased, driven by automation, accessible tooling, and cloud-specific attack vectors.

Web Application Firewalls remain a critical part of the security stack—but in 2026, the challenge is no longer whether a WAF is deployed. The real question is whether it can be evaluated, measured, and trusted under real operating conditions, especially when consumed as a service.

As WAFs move to SaaS models, teams delegate infrastructure, scaling, and maintenance to the provider. This simplifies operations, but it also changes the evaluation criteria. When you no longer control the underlying system, visibility, isolation, and predictable behavior become non-negotiable technical requirements.

Evaluating a WAF in 2026 is fundamentally different

Traditional evaluations focused heavily on rule coverage or whether a solution “covers OWASP Top 10.” Those checks still matter—but they no longer reflect production reality.

A modern evaluation must answer practical, operational questions:

  • Can the WAF block malicious traffic without breaking legitimate flows?
  • Does it behave consistently in prevention mode and under load?
  • Can its decisions be observed, explained, and audited?

In SaaS environments, this becomes even more critical. When a false positive blocks production traffic or latency spikes unexpectedly, there is often no lower layer to compensate. The WAF’s behavior is the system’s behavior. If that behavior cannot be measured and understood, the evaluation is incomplete.

Why most SaaS WAF evaluations fall short

Many WAF evaluations fail not due to lack of expertise, but because the process itself is incomplete.
Common pitfalls include:

  • Testing in monitor-only mode instead of prevention
  • Relying on default configurations with no real traffic
  • Ignoring operational limits until production
  • Inability to trace why a request was blocked

In SaaS models, additional constraints often surface late: payload size limits, rule caps, log retention, export restrictions, or rate limits in the control plane. These are not secondary details—they directly affect detection quality and incident response.

A meaningful evaluation must be observable and reproducible. If you cannot trace decisions through logs, correlate them with metrics, and explain them after the fact, the WAF becomes a black box.

Detection quality is defined by false positives, not demos

Detection capability is often summarized by a single number, usually the True Positive Rate (TPR). While important, this metric alone is misleading.

A WAF that aggressively blocks everything will score well in detection tests—and fail catastrophically in production.

Real-world evaluation must consider both sides of the equation: blocking malicious traffic and allowing legitimate traffic to pass. False positives are not a mere usability issue, especially in API-driven systems, where payload structure, schemas, and request volume amplify their cost.

At scale, even a low False Positive Rate (FPR) can result in:

  • Broken user flows
  • Failed API calls
  • Increased operational load
  • Pressure to weaken or disable protections

This is where most evaluations break down in practice: not on attack detection, but on how much legitimate traffic is disrupted.
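
To make that concrete, here is a toy calculation with invented numbers (the post quotes no real rates): replaying 1,000 recorded attack requests and 20,000 recorded legitimate requests against a WAF in prevention mode:

```shell
# Toy TPR/FPR arithmetic with invented PoC numbers: counts of blocked
# requests out of the attack and legitimate replay sets.
awk 'BEGIN {
  blocked_attacks = 970;  total_attacks = 1000
  blocked_legit   = 12;   total_legit   = 20000
  printf "TPR: %.1f%%\n", 100 * blocked_attacks / total_attacks
  printf "FPR: %.2f%%\n", 100 * blocked_legit  / total_legit
  printf "legitimate requests broken per 1M: %d\n", 1000000 * blocked_legit / total_legit
}'
```

Even the good-looking 0.06% FPR here translates into 600 broken legitimate requests per million, which is exactly the kind of operational load listed above.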

A realistic PoC should include scenarios like:

Source of false positives | Real-world example            | What to test
--------------------------|-------------------------------|----------------------------
Complex request bodies    | Deep JSON, multipart forms    | Recorded API and UI traffic
Business logic flows      | Search, filtering, checkout   | End-to-end navigation
Uploads                   | PDFs, images, metadata        | Real upload paths
Atypical headers          | Large cookies, custom headers | Reverse proxy captures

In SaaS environments, false positives are even more costly, as tuning depends on provider capabilities, change latency, and visibility into decisions.

SKUDONET Cloud Solution

SkudoCloud was designed to deliver application delivery and WAF capabilities as a SaaS service while preserving the technical properties advanced teams need to operate safely in production: transparent inspection, predictable isolation, and full visibility into traffic and security decisions. The goal is to remove infrastructure overhead without turning operations into a black box.

That same philosophy shapes how WAFs should be evaluated in 2026. Teams should assess real behavior: prevention mode, realistic traffic patterns, false positives, API payloads, and performance under load—especially when the service is managed and the underlying system is not directly accessible.

To support that evaluation, we have documented the full methodology in our technical guide:

👉 Download the full guide:

02 January, 2026 11:03AM by Nieves Álvarez

January 01, 2026

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2025/12

The 12th monthly Sparky project and donate report of 2025: – Linux kernel updated up to 6.18.2, 6.12.63-LTS, 6.6.119-LTS – Added to “dev” repos: COSMIC desktop – Sparky 2025.12 & 2025.12 Special Editions released Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in January too, please.

Source

01 January, 2026 08:08PM by pavroo