BEAST, CRIME, BREACH, and Lucky 13: Assessing TLS in ADCS

1. Summary

Several TLS attacks published since 2011 compel a reassessment of how securely ADC uses TLS to form ADCS. While the specific attacks tend not to be trivially replicated against a DC client as opposed to a web browser, remaining conservative about security is still worthwhile, and the underlying issues they exploit could cause problems regardless; ADCS’s best response is therefore to deprecate SSL 3.0 and TLS 1.0. Ideally, one should use TLS 1.2 with AES-GCM. Failing that, ensuring that TLS 1.1 runs and chooses an AES-based ciphersuite works adequately.

2. HTTP-over-TLS Attacks

BEAST renders practical Rogaway’s 2002 attack on CBC ciphersuites in SSL/TLS by exploiting TLS 1.0’s predictable, chained IVs as a chosen-plaintext oracle. By asking whether each possible byte in each position produces a matching ciphertext block, it decodes an entire message. One can avert BEAST either by avoiding CBC in favor of RC4 or by updating to TLS 1.1 or 1.2, which generate a fresh random IV per record and thereby undermine BEAST’s sequential attack.
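The predictable-IV mechanism can be sketched concretely. This is an illustrative toy, not real TLS: a hash-based stand-in replaces AES (only the forward direction is needed), but the guess-verification algebra is the one BEAST relies on.

```python
import hashlib
from os import urandom

BS = 16

def E(key: bytes, block: bytes) -> bytes:
    # Toy deterministic stand-in for a block cipher; real TLS uses AES.
    return hashlib.sha256(key + block).digest()[:BS]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = urandom(BS)
secret = b"secret block 01!"            # plaintext block the attacker wants

# TLS 1.0: record n's IV is the last ciphertext block of record n-1,
# which the attacker has already observed on the wire.
iv_used = urandom(BS)
c_secret = E(key, xor(iv_used, secret))
iv_next = c_secret                      # predictable next IV

def encrypt_next(chosen_plaintext: bytes) -> bytes:
    return E(key, xor(iv_next, chosen_plaintext))

# To test a guess g, the attacker submits iv_next XOR iv_used XOR g;
# the resulting ciphertext equals c_secret exactly when g == secret.
probe = xor(xor(iv_next, iv_used), secret)
assert encrypt_next(probe) == c_secret
assert encrypt_next(xor(xor(iv_next, iv_used), b"wrong guess 0000")) != c_secret
```

TLS 1.1+ breaks this by drawing a fresh random IV per record, so `iv_next` is no longer known when the attacker chooses a plaintext.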

CRIME and BREACH build on Kelsey’s 2002 “Compression and Information Leakage of Plaintext” attack. CRIME “requires on average 6 requests to decrypt 1 cookie byte” and, like BEAST, guesses byte by byte: it recognizes DEFLATE’s smaller output when it has found a pre-existing copy of the correct plaintext in its dictionary. Unlike BEAST, CRIME and BREACH depend not on TLS version or on CBC versus RC4 ciphersuites but merely on compression. Disabling HTTP and TLS compression therefore avoids CRIME and BREACH.
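The compression side channel is easy to demonstrate with nothing but zlib. This sketch uses a hypothetical cookie value and shows only the core observation, that an injected payload repeating the secret compresses better than one that doesn’t; the full CRIME procedure iterates this byte by byte.

```python
import zlib

# Hypothetical request containing a secret (the cookie) plus an
# attacker-controlled injection point (PAYLOAD).
stream = b"GET /?q=PAYLOAD HTTP/1.1\r\nCookie: session=hunter2\r\n"

def length_after_compression(payload: bytes) -> int:
    # Compress-then-encrypt with a length-preserving cipher leaks
    # exactly this number to a network observer.
    return len(zlib.compress(stream.replace(b"PAYLOAD", payload)))

# A payload repeating the secret collapses into a DEFLATE back-reference;
# a wrong guess leaves its novel bytes as literals.
right = length_after_compression(b"session=hunter2")
wrong = length_after_compression(b"session=qxjvwkp")
assert right < wrong
```

The observer never sees plaintext; the ciphertext length alone distinguishes the correct guess, which is why the only robust fix is disabling compression of attacker-influenced data.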

One backwards-compatible solution thus far involves avoiding compression due to CRIME/BREACH and avoiding BEAST with RC4-based TLS ciphersuites. However, a new attack against RC4 in TLS by AlFardan, Bernstein, et al exploits double-byte ciphertext biases to reconstruct messages using approximately 2^29 ciphertexts; as few as 2^25 achieve a 60+% recovery rate. RC4-based ciphersuites decreasingly inspire confidence as a backwards-compatible yet secure approach to TLS, enough that the IETF circulates an RFC draft prohibiting RC4 ciphersuites.

Treating DC as sufficiently HTTP-like to borrow its threat model thus far, the options narrow to TLS 1.1 or TLS 1.2 with an AES-based ciphersuite. One must still beware: Lucky 13 weakens even TLS 1.1 and TLS 1.2 AES-CBC ciphersuites, leaving, between it and the RC4 attack, no unscathed TLS 1.1 configuration. Instead, AlFardan and Paterson recommend that implementations “switch to using AEAD ciphersuites, such as AES-GCM” and/or “modify TLS’s CBC-mode decryption procedure so as to remove the timing side channel”. They observe that each major TLS library has addressed the latter point, so AES-CBC might remain somewhat secure; certainly it is superior to RC4.

3. ADC-over-TLS-specific Concerns

ADCS clients’ and hubs’ vulnerability profiles and the relevant threat models for BEAST, CRIME, BREACH, Lucky 13, and the RC4 break differ from those of a web browser using HTTP. BEAST and AlFardan, Bernstein, et al’s RC4 attack both point to adopting TLS 1.1, a ubiquitously supportable requirement worth satisfying regardless. OpenSSL, NSS, GnuTLS, PolarSSL, CyaSSL, MatrixSSL, BouncyCastle, and Oracle’s standard Java crypto library have all already “addressed” Lucky 13.

ADCS doesn’t use TLS compression, so that aspect of CRIME/BREACH does not apply. The ZLIB extension, however, does operate analogously to HTTP compression. Indeed, the BREACH authors remark that:

there is nothing particularly special about HTTP and TLS in this side-channel. Any time an attacker has the ability to inject their own payload into plaintext that is compressed, the potential for a CRIME-like attack is there. There are many widely used protocols that use the composition of encryption with compression; it is likely that other instances of this vulnerability exist.

ADCS provides an attacker this capability via logging onto a hub and sending CTMs and B-, D-, and E-type messages. Weaponizing it, however, works best when the injected payloads can discover cookie-like repeated secrets, which ADC lacks. GPA and PAS operate via a challenge-response system. CTM cookies find use at most once. Private IDs would presumably have left a client-hub connection’s compression dictionary by the time an attack might otherwise succeed, and they don’t appear in client-client connections. While a detailed analysis of practical feasibility remains wanting, I’m skeptical that CRIME and BREACH much threaten ADCS.

4. Mitigation and Prevention in ADCS

Regardless, some of these attacks could be avoided entirely with specification updates incurring no ongoing cost and hindering implementation on no common platforms. Three distinct categories emerge: BEAST and Lucky 13 attack CBC in TLS; the RC4 break, well, attacks RC4; and CRIME and BREACH attack compression. Since one shouldn’t use RC4 regardless, that leaves AES-CBC attacks and compression attacks.

Disabling compression might incur substantial bandwidth cost for little thus-far demonstrated security benefit, so although ZLIB implementors should remain aware of CRIME and BREACH, continued usage seems unproblematic.

Separately, BEAST and Lucky 13 point to requiring TLS 1.1 and, following draft IETF recommendations for secure use of TLS and DTLS, preferring TLS 1.2 with TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 or another AES-GCM ciphersuite when both endpoints support it. cryptlib, CyaSSL, GnuTLS, MatrixSSL, NSS, OpenSSL, PolarSSL, SChannel, and JSSE all support both TLS 1.1 and TLS 1.2, and all but Java’s JSSE support AES-GCM.
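As a rough illustration of that configuration, here is how one might pin the version floor and ciphersuite preference with Python’s ssl module; an ADCS client or hub written in C++ would set the analogous OpenSSL options. This sketch picks TLS 1.2 as the floor, the stricter of the two options above, and the cipher names follow OpenSSL’s spelling of the suites named in the text.

```python
import ssl

# Refuse SSL 3.0, TLS 1.0 and TLS 1.1 outright, and prefer AES-GCM suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# OpenSSL names for TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 and an ECDHE variant.
ctx.set_ciphers("DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256")
```

A hub would do the same with `PROTOCOL_TLS_SERVER`; either way, a peer offering only SSL 3.0/TLS 1.0 simply fails the handshake instead of silently negotiating down.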

Suggested responses:

  • Consider how to communicate to ZLIB implementors the hazards and threat model, however minor, presented by CRIME and BREACH.
  • Formally deprecate SSL 3.0 and TLS 1.0 in the ADCS extension specification.
  • Discover which TLS versions and features clients (DC++ and variants, ncdc, Jucy, etc) and hubs (ADCH++, uHub, etc) support. If they use standard libraries, they probably all (except Jucy) already support TLS 1.2 with AES-GCM, depending on how they configure their TLS libraries. Depending on the results, one might already be able to safely disable SSL 3.0 and TLS 1.0 in each such client and hub and prioritize TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 or a similar ciphersuite so that it finds use when mutually available. If this proves possible, the ADCS extension specification should be updated to reflect it.

DC++ 0.825

A new security & stability update of DC++ has been released today. There are no new features this time; the update fixes a couple of severe security vulnerabilities discovered since the release of version 0.822. The following problems were fixed:

  • The client can crash when multiple partial file list uploads are requested at the same time or in quick succession. This problem affects the previous two releases (versions 0.820 & 0.822).
  • The originator of some types of ADC protocol messages isn’t correctly verified. This allows a malicious client to block the outgoing connections of other users logged into an ADC hub by sending commands that should be accepted only from the hub. This problem exists in all earlier versions of DC++, and the solution needs fixes in various ADC hub software as well. A more detailed description of this vulnerability can be found in the original bug report.

Due to the nature of these bugs an immediate upgrade is recommended.

The road ahead: Security and Integrity

The community we are part of has had its fair share of security threats, originating from software bugs, protocol issues, malicious users and even from the developers of the network.

Security and integrity are very broad terms, and my use of them is correspondingly broad: I believe they address multiple points and need not simply be about remotely crashing another user. A system’s security and integrity are tightly coupled and may sometimes overlap.

We face a variety of issues.

Issue 1: Software issues
Writing software is hard. Really hard. It’s even more difficult when you also include the possibility for others to impact your system (client/hub etc): chat messages, file sharing exchange etc. Direct Connect hinges upon the ability to exchange information with others, so we cannot simply shut down that ability.

A software issue or bug arises differently depending on what type of issue we’re talking about.

The most typical bug is that someone simply miswrote code, “oops, it was supposed to be a 1 instead of a 0 here”.

The bugs that are more difficult to catch, and consequently to fix, are design issues, which can stem from the fundamental use of a component or of the application’s infrastructure: “oops, we were using an algorithm or library that has fundamental issues”.

A security issue may stem from an actual feature, for instance the ability to double-click magnet links. That is, the bug is that the software is not resilient enough against a potential attack: there’s nothing wrong with the code itself, it simply isn’t built to withstand a malicious user. (Note: this is not a criticism of magnet links; they were simply an example.)

A software bug may not only let malicious users or (other) software exploit the system; it may also cause the integrity of content to crumble. For instance, before hashing, matching different files to each other was done via reported name and file size. This was fundamentally flawed, as there was no way of verifying that two files were identical beyond the name and size, both of which can easily be faked.
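The difference between the two identity schemes can be sketched in a few lines. DC actually uses Tiger tree hashes for this; SHA-256 stands in here only because it ships with the standard library.

```python
import hashlib

def naive_identity(name: str, data: bytes):
    # Pre-hashing scheme: identity is (name, size), both trivially faked.
    return (name, len(data))

def hash_identity(data: bytes) -> str:
    # Content hash: depends only on the actual bytes shared.
    return hashlib.sha256(data).hexdigest()

real = b"x" * 1024
fake = b"y" * 1024   # same name, same size, entirely different content

# The naive scheme cannot tell the files apart; the hash can.
assert naive_identity("track.mp3", real) == naive_identity("track.mp3", fake)
assert hash_identity(real) != hash_identity(fake)
```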

A software issue may be addressed by simply blocking functionality (e.g., redirects to certain addresses, stopping parsing after X characters, etc). While this is the simplest course of action, removing functionality is often not what users want.

Issue 2: Protocol issue or deficiencies
Systems and protocols that allow users to perform certain actions carry with them a set of potential security issues. The problem with writing a protocol is that other people need to follow it: the developers of a piece of software may not be the same as the developers for the protocols. For Direct Connect, there’s a very close relationship between the two groups (it’s actually closer to one group at the time of writing), so this issue may not be that severe. However, there will always be a discrepancy between the maintainers of the protocol and software. Imagine the scenario where the developers for a software suddenly disappear (or are otherwise not continuing updates). The developers for the protocol cannot do anything to actually address issues. In the reverse situation, the software developers can simply decide for themselves (effectively creating their own ‘protocol group’) that things need to be updated and do so.

Any protocol issue is hard to fix, as you must depend on multiple implementations to manage the issue correctly. The protocol should also, as best as it can, provide backwards compatibility between its various versions and extensions. Any security issue that comes in between can greatly affect the situation.

A protocol issue may also simply be that there’s not enough information about what has happened. For example, the previous DDoS attacks could continue because the protocol had no way to inform other clients and hubs (and web servers etc) of what was happening.

The original NMDC had no hashes and as such no integrity verification for files. This was a fundamental issue with the protocol and extensions were provided later on to manage the (then) new file hashing. This wasn’t so much a bug in the protocol, it was simply that it was a feature NMDC’s founder hadn’t thought of.

When software is told to interact in a certain way according to the protocol, then those actions are in effect the protocol’s doing. For example, the (potential) use of regular expressions for searches is not a problem for the protocol itself: the specification for regular expressions in ADC is quite sparse and very simple. However, the problem with regular expressions is that they’re expensive to evaluate, and any client that implements that functionality effectively opens itself up to a world of hurt if people are malicious enough. While the functionality lies in the software’s management of the feature, it is the protocol that mandates its use. (Note: in ADC, regular expressions are considered an extension. Any extension is up to the developers to implement if they so choose; that is, there is no requirement that clients implement regular expressions. However, those that do implement regular expression functionality are bound by the protocol once they announce so.)
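A client that does choose to implement the regular-expression extension might defend itself along these lines. The length cap and the nested-quantifier heuristic here are illustrative choices of mine, not anything the ADC specification prescribes.

```python
import re

MAX_PATTERN = 64  # hypothetical cap; ADC itself mandates no such limit

def safe_search_pattern(pattern: str):
    """Bound and vet a user-supplied search regex before ever running it."""
    if len(pattern) > MAX_PATTERN:
        raise ValueError("pattern too long")
    # Crude heuristic for nested quantifiers like (a+)+ which can cause
    # catastrophic backtracking in backtracking regex engines.
    if re.search(r"\([^)]*[+*][^)]*\)[+*]", pattern):
        raise ValueError("pattern rejected")
    return re.compile(pattern)

safe_search_pattern(r"\.mp3$")       # accepted
try:
    safe_search_pattern(r"(a+)+$")   # rejected by the heuristic
    rejected = False
except ValueError:
    rejected = True
assert rejected
```

A production client would likely go further, e.g. running matches with a time budget or a non-backtracking engine, but the point stands: the cost of the feature lands on the implementer, not the protocol.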

Issue 3: Infrastructure
The infrastructure of the system must withstand security threats and issues.

If a hosting service goes down for a particular software project, that software cannot ship updates responding to upcoming issues. Official development simply stops at that point on that service (and the developers need to find another route).

If a hosting service decides to remove old versions (say, because it prunes software after 2 years, or for legal reasons), then someone needs to keep backups of the information.

A large part of the DC infrastructure is the ability to connect to the available hublists. This issue became apparent a few years ago when the major hublists were offline while various software didn’t update. People simply couldn’t connect to hubs, and for beginners this is even more frustrating. There are now various mitigation approaches for these scenarios, such as local caching, proxy/cloud caching and even protocol suggestions to handle outages and alternative distribution avenues.

Infrastructure isn’t simply being able to download software and connect to a hublist, it is also the ability to report bugs, request features and get support for your existing software and resources.

A very difficult problem with infrastructure is that it is often very costly (in money) for the developers to set up. Not only that, it must be done properly, which is also costly (in time) and hard. Moreover, most people aren’t experts at setting up resources of this kind, and there is plenty of information available online about avenues of attack against forums and websites.

Infrastructure issues can be mitigated by moving some services out in a distributed manner (whilst a set of people maintain the resources) and moving some services out to the users themselves (for example, allowing clients to automatically exchange hub lists). Obviously, the services must be there from the start; otherwise there’s little one can do.

Issue 4: People

Software, infrastructure and our ideas only last so far. If a person has the means and intent, they can cause various problems for the rest of the community. Most of the time, we envision a person trying to cause havoc using a bug in the system (or equivalent) but that is not the only concern we have when it comes to people and their interactions.

While a person with the know-how and the tools can cause tremendous problems, the people who can cause the most harm are those who control key resources within the system. For example, a hub operator may cause problems in a hub by kicking and banning people. But the hub owner can do much more than that, since they control the very resource that people are using.

That means the developers and owners of each resource must guard themselves against the others they share that control with. This is primarily a problem when the two (or more) people who share a resource disagree on an issue, and one party decides to shut the resource down. The last instance of this was last year, with ADCPortal; similar problems have occurred in the past.

The problem with this is that we all need to put trust in others. If we don’t, we can’t share anything and the community crumbles. A problem with resource ownership and control is the general problem of responsibility: if I own a resource (or have enough control over it), I am expected to continue developing and nurturing it. If I do nothing in response to security issues (or any other issue), then that resource eventually needs to be switched out.

The solution is to share resources in a way that allows people to contribute as much as possible. The community should encourage those who are open about content, and try to move away from a “one person controls everything” system. This is extra difficult and puts pressure on all of us.

The road ahead

Security cannot be obtained by ignoring the problems we face. The community gains very little by obfuscating the ‘when’ and ‘how’ of security: being closed about the security issues we face only slows down a malicious party so much.

Disclosure of security issues is an interesting aspect, and the developers owe it to the community to be as direct as possible. It does not help to wait one day, one week or one year to inform people; anyone vigilant enough will discover problems regardless of when and how we announce them. An announcement (or a note in a changelog, or information given some other way) shouldn’t cause people to treat the messenger badly. Instead, the key is to have an open dialog between developers, hub owners, users and anyone else involved in the community. The higher the severity of the security issue, the more reason to treat any potential forthcoming issue directly and swiftly. I believe it would also be good if someone reviewed past security issues and put them together in a similar article or document, essentially allowing current and future developers to see the problems that have been encountered and, hopefully, how they were solved (this has been done to a certain extent). Discussing security issues with security experts from various companies may also be a way forward.

The community must be active in security and integrity issues. A common phrase in development is to be “liberal in what you accept and conservative in what you send”. This applies to both software and protocol development.

Software should have clear boundaries where input from another user or client can cause an impact.

Protocols should be reactive in the hashing methods, algorithms and general security they use. The new SHA-3 standard is interesting in this respect, and it would be good if we switched to something that provides higher security or integrity for us. Direct Connect has gone from a clear-text system to a secure-connection system (via TLS and keyprints). The system could be further extended with Tor or other anonymity services, to provide the anonymity that other systems have.

The security of our system shouldn’t depend on “security by obscurity”: before DC++ added an IP column to its user list, people (incorrectly) believed that their IP was “secret”. Nor should it depend on obfuscating security issues, since they’ll only hit us even harder in the future. There are other cases where the normal user doesn’t know enough about security; for example, when people disclosed how a hub owner could sniff all data from their hub and their users’ interactions. While I strongly believe it’s difficult to educate your users (on any topic, really), you shouldn’t lie to them. Instead, provide ample evidence and reassurance that the information is treated with care and that you, as developers and fellow users, consider security important.

Security is tricky because it may sometimes seem like there’s a security issue when there in fact isn’t. This makes it important to investigate issues and not rush to a solution. It is also important that people don’t panic and run around yelling “security problem!” as if there’s no tomorrow (I’ve been the source of such a scare, I’ll admit). Equally important is that those who know more about security should decide protocol and software aspects, as the topic shouldn’t be subject to whimsical changes “because it makes no sense, right?” (I’ll once again, unfortunately, admit to being the cause of such an issue, regarding ADC, but hopefully it will be rectified soon-ish).

The road ahead is to investigate security issues in a timely but proper manner, to be proactive and to be up front about problems. Time should be spent investigating a component’s weaknesses, and that component should then be discarded if the hurdles are too difficult to overcome.

Yet another remote crash disclosure

As one of the most easily exploitable remote crashes in the history of DC++ was explained earlier today, let me reveal an older one that has been kept away from the public so far.

The problem in question is a bug in the handling of queue items for partial file list requests. Though the bug can be used for a remote crash, it is not nearly as critical as the one with magnet link formatting. The scenario is described pretty well in the filed bug report, which has now also been made available to the public.

To summarize: the crash can happen only if the attacker is able to convince the victim to browse his/her filelist. As the attacker’s nick must be changed at the right time for a successful exploit, a malicious partial list item will remain in the queue. The victim must then manually delete this unfinished queue item from the download queue for a chance to be crashed. Moreover, as nick changes are allowed only on ADC hubs, this bug is not exploitable on NMDC.

The problem was fixed in DC++ 0.790 and affects any older version that is capable of connecting to ADC hubs.

Mainchat-crashing DC++ 0.797 and 0.799

DC++ 0.800 fixes a bug wherein multiple magnet links in one message cause a crash. To crash DC++ 0.797 or 0.799, send a main chat message with multiple magnet links. This requires no special operator privileges and can cause general disarray fairly easily.

Since DC++ versions prior to 0.790 are vulnerable to several remote crash exploits themselves (for 0.782), only DC++ 0.790 and DC++ 0.801 remain secure. Other versions, including the ever-popular DC++ 0.674, can be crashed by untrusted, remote users.

May improved security ever prevail.

How to crash DC++ 0.674

$ADCGET list //// 0 -1 ZL1|

A previous blog post mentions this, but apparently isn’t sufficiently explicit about what to send.

I aim to fix that.

Enjoy, all. This apparently works on DC++ clients older than 0.707 which still support $ADCGET.

Long lost response regarding DC being used as a DDoS tool

A really long time ago, we were interviewed regarding the role DC plays in DDoS attacks: GargoyleMT was interviewed by Brian Krebs (washingtonpost.com), and the following is what he said to Krebs. I don’t think Krebs published anything (or at least I can’t find it). Note that the date of this mail is 2007-05-25. (I don’t know why, but this WordPress post has a newer timestamp than when the mail conversation took place. As the SecurityFocus article indicates, it was around the latter part of May.)

Brian, I’m not sure if you’re still looking for information about what Prolexic (and now Netcraft) have reported about attacks using the Direct Connect network.

A little bit of history may help understand what the Direct Connect network is. It got its start in December of 1999 by Jon Hess, then a high school student. It was heavily inspired by Internet Relay Chat (IRC), and the social aspect of chatting can be seen in his design (I have a couple old interviews of him bookmarked at home that may give a little more information). This was the year of Napster, when peer-to-peer networks were getting their start, and before Justin Frankel (of Winamp) had released Gnutella (which first pioneered decentralized peer-to-peer networks). Direct Connect was designed around separate, user-run, independent hubs, tied together only loosely by a “hub list.” This design is a lot more like Napster’s centralization than Gnutella’s decentralization, especially since hubs themselves do not interlink (though there are some protocol commands for doing so). Because of this design, Jon developed two separate programs: a client software (which we call NMDC for NeoModus Direct Connect; NeoModus was the company name Jon used to publish his software, see the Wayback machine at http://web.archive.org/web/*/www.neo-modus.com) and a hub software. Each hub software had an option to register on the hub list, but it was not mandatory.

Shortly after it became popular, many people worked on reverse engineering the protocol that Jon used. Once enough knowledge of the protocol was obtained, clients were created, including DC++ by Jacek Sieka in November of 2001. Today, nearly all of the clients on the Direct Connect network are open source, and quite a few hubs are as well. The protocol used today is nearly identical, but (mostly) backwards compatible with the original client and hub. Jon’s software has fallen out of favor, and DC++ is (probably) the most popular client for the network. There are also many derivatives of DC++, since it is licensed under the GNU General Public License. There are a number of hubs, YnHub ( http://ynhub.org/) is one of the more popular ones, since it works on Windows, has a nice GUI, and contains enough options so that hub owners can run hubs the way they like. Hubs have grown, but a “big” hub is well under 10,000 users, and most probably in the 500 – 2500 user range.

The abuse, as we see it, doesn’t exploit any bugs in DC++ per se. Nothing as glorious as buffer overflows, at least; the software simply hasn’t armored itself against ways the protocol could be misused to hurt others. The protocol was intended to be proprietary, and wasn’t designed to protect against malicious clients or hubs.

The two commands which are being exploited are the following commands:
$ForceMove <ip or address:optional port>
This command forces a DC client to disconnect from the current hub and try to connect to the address specified. (It is used in some multi-hub configurations to shuffle users between hubs, generally as a form of load balancing.) The original DC hub software had a port of 411, but it allowed customization. A malicious hub can send “$ForceMove http://www.example.com:80” to multiple users and get them to try to connect to that server using the DC protocol. In DC++ 0.699 (released Dec. 18, 2006), DC++ will try to connect once, but not reconnect unless it has successfully completed a full Direct Connect handshake with the remote address. This type of attack shouldn’t be very effective with DC++ 0.699. Versions before this will reconnect on a slightly variable scale, between 2 and 3 minutes. ($ForceMove is what we typically classify as an “operator” command, so normal users should not (unless the hub is configured for it) be able to use this command to initiate an attack. Rogue operators on white hat hubs could, however.)

$ConnectToMe <RemoteNick> <SenderIp>:<SenderPort>
This is the command that instructs the receiving user (<RemoteNick>) to try to connect to <SenderIp> on <SenderPort> (via TCP). This connection is nearly exclusively for downloading files. This command, like the one above and most others, passes through the hub. A white hat hub will check <SenderIp> against the IP address of the sender, and only relay the command if they match. A black hat hub may not do that. Or worse, it may modify well-formed $CTMs (as we shorten it) to contain the IP of a machine it would like pummeled with connections. DC++ (as will any DC client) will try to connect to the remote IP on the specified port once. It will not retry on its own, but it will try one time per $CTM. (I'm not sure whether it can be persuaded to try multiple connections to the same IP/port at the same time.) This attack cannot succeed without the complicity of the hub.
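The white-hat-hub check described above amounts to a simple comparison. This sketch assumes the $ConnectToMe layout shown and a hypothetical helper name; a real hub would already know each user's connection address from its socket.

```python
def should_relay_ctm(command: str, sender_real_ip: str) -> bool:
    """Relay a $ConnectToMe only if the embedded IP matches the
    address the sender actually connected to the hub from."""
    # command looks like: "$ConnectToMe RemoteNick 1.2.3.4:412"
    try:
        _, _, addr = command.split(" ", 2)
        claimed_ip, _ = addr.rsplit(":", 1)
    except ValueError:
        return False  # malformed command: drop rather than relay
    return claimed_ip == sender_real_ip

assert should_relay_ctm("$ConnectToMe Bob 1.2.3.4:412", "1.2.3.4")
assert not should_relay_ctm("$ConnectToMe Bob 5.6.7.8:80", "1.2.3.4")
```

A black-hat hub simply skips (or inverts) this check, which is exactly what turns $CTM into a DDoS reflector.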

Prolexic certainly has drawn attention to this subject, but they're not the first to suffer such attacks. Hublist.org, created by Marko Virkkula (aka Gadget), was the default hub list for DC++ for a long period of time (starting July 2003). Hublist.org has been experiencing attacks since April of 2006, and the methods used above may be a direct result of his war of escalation against the attackers. A domain I bought (dcpp.net) to host DC++'s web presence was definitely attacked by one of the two above methods. We changed hosting companies once, but were ultimately forced to pare back our web site and move a smaller version of it back to sourceforge.net's project space. I wasn't involved enough in the administration of either host machine to come away with the specifics of the attack, other than that it was DC traffic directed to the HTTP port.

As for preventing or mitigating the severity of this type of attack, I think there are a couple key points. We cannot change the protocol radically to fix this, as we're (Jacek Sieka, Fredrik, myself, and couple other regular contributors) only in control of one of the clients. (There is a developer community that represents quite a few of the packages, but not all of them.) All client and hub software would also need to be changed, and users would have to upgrade their respective software. We have an alternate protocol under development (ADC) that should lessen the concerns (as IP addresses are distributed to each client during the initial connection to the hub). That said, users can (and should) upgrade their client when a new version of DC++ comes out. On the release of a new stable version, each user with an older client is told about it once per startup of the application. Currently, 0.698 is marked as stable, so users need to ensure they have 0.699 installed. Developers who base their DC client on DC++ can sync their client more quickly following a release of DC++, or backport all of the fixes. Most importantly, we know that some of the hubs on the DC network are not to be trusted. They may be either public hubs (registered on one or more hub list) or private hubs (unregistered but allowing new members or protected via user name and password). Users who watch the output of their client can guess whether they're being involved in an attack. For the $ForceMove attacks, one of their hub windows will show as disconnected, with a long line of "*** Connecting …" messages without a single success. Users should close this window, and be wary if they decide to visit the hub that issued the redirect. For users involved in a $ConnectToMe attack, the "transfer view" of their client will show a number of upload connections in the "Connecting…" state. Through the process of elimination, they can determine which hub is issuing these bogus connection attempts. 
We have been burned with these attacks as well, so we'll keep looking for ways of improving the program.

Vista SP1 loses MS support and thus DC++ support

As of July 12, 2011, Vista SP1 loses Microsoft support. DC++ will likely continue to run fine, but one might not obtain support when using Vista SP1.

I4/I6 should be broadcasted regardless of TCP4/TCP6

In ADC, it’s possible for clients to announce from which IP they’re connecting. This IP is later usually used to identify what address to connect to if they accept incoming TCP connections. That is, when you want to connect to someone, you announce ‘I4’ (for IPv4; I6 for IPv6) and send a “connect to me” message, or CTM. You also announce TCP4 or TCP6 in the feature field for others.

In ADC (in NMDC as well, really), there are two types of users; those who accept incoming TCP connections and those that do not. The former is usually called ‘active’ and the latter ‘passive’. Active users can connect to other active users. Active users can connect to passive users. Passive users can connect to active users (by a ‘reverse’ CTM message). Passive users can’t connect to other passive users (I’m going to purposefully ignore NAT traversal since it allows passive to passive connections, but it isn’t useful in this discussion).

The reverse CTM for passive-to-active connections works by a simple mechanism. The passive user says to the active user “RCM”, reverse connect to me. The active user then proceeds to connect to the passive user, and the downloading commences.

Active clients are basically required to signal their address in the I4/I6 field (that is simply the nature of the field), whereas passive users are not required (but keep reading).

So far so good, nothing bad has happened.

Now, imagine the case where an active user wants to connect to another active user. They both signal I4 (I’ll use it for simplicity’s sake, but it applies to I6 as well). The downloader signals address 1.1.1.1, the uploader signals 2.2.2.2. The downloader sends “connect to me” to the uploader. The uploader connects from 2.2.2.2 to 1.1.1.1, and the communication continues. Everything’s all right for now.

Imagine instead that the downloader is a passive user. The passive user says to the active user ‘send a connect to me so I can connect to you’. The active user says ‘connect to 1.1.1.1’, to which the passive user connects. However, the connection can come from anywhere, and not necessarily from the IP with which the passive user connected to the hub! That is because the active user has no knowledge of which IP the passive user should connect from.

(Now, obviously, a token will be sent which the passive user will need to verify, but the problem of the connection point does not go away.)

To solve the problem, passive clients should publish I4/I6 regardless of whether they support TCP4/TCP6. If you look at the specification, TCP4/TCP6 require I4/I6 but not the other way around, so this change (i.e., ‘everyone should send I4/I6’) should not have any effect on existing implementations.

Do note that the hub can of course send I4/I6, regardless of whether the client sends it or not.
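The proposed rule can be sketched as the check an active client might perform before honoring an incoming connection. The INF parsing here is deliberately simplified (real ADC fields need escaping rules and positional handling), and the function names are hypothetical.

```python
def inf_fields(inf: str) -> dict:
    """Very simplified ADC INF field parser:
    'BINF AAAA NIalice I41.1.1.1' -> {'NI': 'alice', 'I4': '1.1.1.1'}"""
    parts = inf.split(" ")[2:]          # skip message type and SID
    return {p[:2]: p[2:] for p in parts}

def accept_incoming(requester_inf: str, peer_ip: str) -> bool:
    """Honor a connection only if the requester announced I4/I6 and the
    incoming connection actually originates from that address."""
    fields = inf_fields(requester_inf)
    announced = fields.get("I4") or fields.get("I6")
    return announced is not None and announced == peer_ip

assert accept_incoming("BINF AAAA NIalice I41.1.1.1", "1.1.1.1")
assert not accept_incoming("BINF AAAA NIalice", "1.1.1.1")   # no I4: reject
```

With every client (passive included) publishing I4/I6, this check becomes possible in both directions, closing the connection-point ambiguity described above.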

Don’t forget that you can make topic suggestions for blog posts in our “Blog Topic Suggestion Box!”

Why DC++ 0.674 is Insecure

Update 2017-10-21: invalid ADC commands sent via UDP will crash the app, which DC++ 0.867 fixes; this adds one more way to crash DC++ 0.674.

Update 2017-08-02: somehow, six (6) years later, this remains an issue. In that time, the actively developed DC++ and DC++-based clients one might try have become DC++ itself, ApexDC, AirDC++, and EiskaltDC++.

Furthermore, How to crash DC++ 0.674 describes more specifically how to remotely crash DC++ 0.674. It is strongly advised to update to a current version of an actively developed client.

Original post follows.

DC++ 0.674 remains surprisingly popular. However:

These reasons all apply to any vaguely modern client older than DC++ 0.707 (and the last three to clients through 0.75), actually, but 0.674 seems to have kept the most users of those old versions so I target it specifically. Instead, it’s much safer to use a currently-maintained client; if one prefers a pre-DC++ 0.7xx style GUI, one might look at StrongDC++ or any of its descendants.
