Concurrent hnsw graph and builder, take two #12421
jbellis wants to merge 21 commits into apache:main
Conversation
Mayya Sharipova left a comment:

Great work! Have you compared the recall of the parallel graph with the serially built graph (for example using ann-benchmarks)?
benwtrent left a comment:
Very high level so far, but it seems very complicated to me.
We should always expect an executor from the caller.
We shouldn't do one node per thread, but probably batch, no? Especially since random node access isn't thread safe.
I understand how this could be useful. Especially if we ever do multi-threaded merges.
It would also be good to have recall numbers to ensure correctness.
lucene/core/src/java/org/apache/lucene/util/hnsw/ConcurrentHnswGraphBuilder.java
Parallel code with one thread will build the same graph as serial code. (This is validated by unit tests.) With multiple threads, parallel code errs on the side of potentially adding "extra" connections. So recall should be equal or greater, and that is what I see with the 1M SIFT dataset.

Serial code: 92.0% recall over 5 runs
Parallel code built with 8 threads: 92.4% over 5 runs
…On Mon, Jul 10, 2023, Mayya Sharipova wrote:
> Great work! Have you compared the recall of the parallel graph with the serially built graph (for example using ann-benchmarks)?

– Jonathan Ellis
… now fail as expected when RAVV is not correctly copied)
…we'll be comparing node1 with a bunch of others; this also fixes the unsafe use of vectorsCopy for both v1 and v2
…s like the disk-backed RAVV implementations
While making that last commit (everything still passes, after the preceding fixes) I noticed that AbstractMockVectorValues returns null for out-of-bounds ordinals, while most of the other implementations throw exceptions (different exceptions, depending on the implementation). Should we standardize this?
Any progress on reviewing this? We want to be able to use a concurrent implementation in Apache Cassandra and would prefer not to have to run code off a fork to do it.
…have the same score ("two hard problems in CS...")
benwtrent left a comment:
I didn't even really get to touch on the ConcurrentOnHeapHnswGraph class yet. The views & completion tracker seem trappy and need some careful reading.
lucene/core/src/java/org/apache/lucene/util/hnsw/HnswGraph.java
    graphUpperLevels.add(
        new HashMap<>(
            16, levelLoadFactor)); // these are the default parameters, made explicit
  }
We shouldn't specify the loadFactor; that doesn't make sense to me.
Also, with a capacity of 16 this really only pre-allocates room for 12 entries before the map grows again; I am not sure if that is your purpose or not.
We specify the loadFactor because it's necessary to compute RAM usage, and you can't retrieve the loadFactor once the Map is constructed.
The 16 is there because before it was just new HashMap(), and I can't specify the loadFactor without also specifying the initialCapacity. As the comment indicates, 16 is the default initialCapacity.
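To illustrate the point being made here (a sketch with hypothetical names, not the PR's code): java.util.HashMap exposes no getter for its load factor, so any RAM-usage accounting has to record the value it was constructed with. The 16/0.75 pair also explains benwtrent's "pre-allocating up to 12 values" remark, since a HashMap resizes once its size exceeds capacity × loadFactor.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: record the load factor explicitly because HashMap
// provides no way to read it back after construction.
public class LevelMap {
  // HashMap's documented defaults, made explicit so RAM math can use them
  static final int DEFAULT_INITIAL_CAPACITY = 16;
  static final float LOAD_FACTOR = 0.75f;

  final Map<Integer, Object> level =
      new HashMap<>(DEFAULT_INITIAL_CAPACITY, LOAD_FACTOR);

  // Rough backing-table size for a given entry count: the table doubles
  // whenever size exceeds capacity * loadFactor (so 16 slots hold 12 entries).
  long estimatedTableLength(int size) {
    int capacity = DEFAULT_INITIAL_CAPACITY;
    while (size > (int) (capacity * LOAD_FACTOR)) {
      capacity <<= 1;
    }
    return capacity;
  }
}
```

With the default load factor, 12 entries fit in the initial 16-slot table, and a 13th forces a resize to 32 — which is the growth behavior discussed above.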
lucene/core/src/java/org/apache/lucene/util/hnsw/OnHeapHnswGraph.java
lucene/core/src/java/org/apache/lucene/util/hnsw/NeighborArray.java
lucene/core/src/java/org/apache/lucene/util/hnsw/ConcurrentNeighborSet.java
    // Consider the following graph with "circular" test vectors:
    //
    // 0 -> 1
    // 1 <- 0
    // At this point we insert nodes 2 and 3 concurrently, denoted T1 and T2 for threads 1 and 2
    // T1                T2
    // insert 2 to L1 [2 is marked "in progress"]
    //                   insert 3 to L1
    //                   3 considers as neighbors 0, 1, 2; 0 and 1 are not diverse wrt 2
    //                   3 -> 2 is added to graph
    //                   3 is marked entry node
    // 2 follows 3 to L0, where 3 only has 2 as a neighbor
    // 2 -> 3 is added to graph
    // all further nodes will only be added to the 2/3 subgraph; 0/1 are partitioned forever
    //
    // Considering concurrent inserts separately from natural candidates solves this problem;
    // both 1 and 2 will be added as neighbors to 3, avoiding the partition, and 2 will then
    // pick up the connection to 1 that it's supposed to have as well.
    addForwardLinks(level, node, candidates); // natural candidates
    addForwardLinks(level, node, inProgressBefore, progressMarker); // concurrent candidates
The "overpruning" seems like it is controlled by beam width and the furthest candidate. Why do we ignore the furthest candidate here? It seems like we shouldn't even bother with forward links for the in-progress candidates if none of them are closer than the currently available ones, because if they were already part of the graph they would likely be ignored.
Why aren't beam width and ignoring candidates further than the current furthest one adequate?
To clarify -- this is about graph connectedness. All graph ANN algorithms will break (in the sense of returning terrible results) if the graph isn't connected, because there's literally no way to get from partition A of the graph to partition B. (I believe that hnsw construction doesn't guarantee connectedness, but it makes it a very highly likely outcome.)
The problem is that when you consider in-progress candidates together with the natural candidates, you end up with a much lower level of connectivity, because you potentially get to see high-diversity neighbors "from the future" that stop the natural neighbors from being added. In other words, the diversity check will remove neighbors that would have been present in the graph if the nodes had been added serially, which in the worst case, as illustrated in this example, can actually result in a partitioned graph.
(Beam width is about "are we searching far enough to find good candidates," so it has no special relevance to the concurrency issue here.)
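The effect of the two separate addForwardLinks passes can be sketched abstractly (hypothetical names, not the PR's code). A candidate is dropped when it is "not diverse" with respect to an already selected neighbor; pruning the natural candidates in their own pass means an in-progress node can never be the already-selected neighbor that knocks a natural candidate out.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

// Sketch of diversity pruning with an injectable "not diverse" test.
// Selected neighbors accumulate into `selected`; a candidate is skipped
// if it fails the diversity test against any neighbor selected so far.
class TwoPassLinks {
  static List<Integer> prune(List<Integer> candidates, List<Integer> selected,
                             BiPredicate<Integer, Integer> notDiverse) {
    for (int c : candidates) {
      boolean keep = true;
      for (int s : selected) {
        if (notDiverse.test(c, s)) { // c too close to an existing neighbor
          keep = false;
          break;
        }
      }
      if (keep) {
        selected.add(c);
      }
    }
    return selected;
  }
}
```

Replaying the scenario in the diff comment, where 0 and 1 are not diverse with respect to 2: a single pass over {2, 0, 1} selects only [2], but pruning the natural candidates {0, 1} first and the in-progress {2} in a second pass yields [0, 1, 2], preserving the links to the 0/1 subgraph.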
lucene/core/src/java/org/apache/lucene/util/hnsw/ConcurrentHnswGraphBuilder.java
lucene/core/src/java/org/apache/lucene/util/hnsw/ConcurrentHnswGraphBuilder.java
  private void insertMultiple(NeighborArray others, BitSet selected) {
    neighborsRef.getAndUpdate(
        current -> {
          ConcurrentNeighborArray next = current.copy();
We are doing so many copies. The other neighbor array is sorted; why can't we do a merge over the selected to reduce our copies?
Or is "others" always only 1 or 2 values?
That would be a reasonable optimization, but this hasn't shown up as a hot spot in my profiling.
Looked at another profile this morning. 99.75% of insertMultiple is score comparisons, for vectors of dimension 256.
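The pattern under discussion is copy-on-write via AtomicReference.getAndUpdate. A minimal sketch (simplified: a plain int[] stands in for ConcurrentNeighborArray, and the real insert logic is elided) of why the lambda copies rather than mutating in place:

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicReference;

// Copy-on-write sketch: getAndUpdate may re-run the lambda if another thread
// wins the CAS race, so the lambda must be pure -- copy, mutate the copy,
// return it. Readers meanwhile see a fully consistent array.
class CowNeighbors {
  final AtomicReference<int[]> neighborsRef = new AtomicReference<>(new int[0]);

  void insert(int node) {
    neighborsRef.getAndUpdate(
        current -> {
          int[] next = Arrays.copyOf(current, current.length + 1); // fresh copy
          next[current.length] = node;
          return next;
        });
  }
}
```

The copy per update is the cost benwtrent is asking about; jbellis's profile above suggests the copies are dwarfed by vector score comparisons at dimension 256.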
Haven't forgotten about this. Just been bogged down with other things. Hope to revisit again soon!
Thanks for the feedback. I've switched my efforts to a DiskANN implementation in JVector, so closing this out.
Motivation
I need to support concurrent reads and writes to an HNSW index for Cassandra. One option would be to partition the document range and assign each range to a single thread with the existing implementation. However,
Performance
Numbers are from my SIFT test harness at https://github.com/jbellis/hnswdemo/tree/lucene-bench, averaged across 5 test runs on my i9-12900 (8 performance cores and 8 efficiency cores).
Serial construction of 1M nodes: 292.3s
Parallel construction, 1 thread: 379.0s = 129.7% of serial
Parallel construction, 2 threads: 191.3s = 50.5% of parallel/1
Parallel construction, 4 threads: 96.1s = 25.4% of parallel/1
Parallel construction, 8 threads: 52.6s = 13.8% of parallel/1
Serial queries of 100k vectors / top 100: 38.3s
Parallel queries, 1 thread: 41.6s = 108.6% of serial
Parallel queries, 2 threads: 21.0s = 50.5% of parallel/1
Parallel queries, 4 threads: 10.3s = 24.7% of parallel/1
Parallel queries, 8 threads: 5.3s = 12.7% of parallel/1
To summarize, there is about a 30% overhead during construction and 10% overhead at query time from using the concurrent class. The concurrent class parallelizes construction nearly perfectly through 4 threads, with some additional inefficiency becoming visible at 8. (Probably this is the effect of having to do more vector comparisons across the concurrently added nodes; I would expect this to remain relatively small rather than exploding as thread counts increase.) Uncontended queries scale close to perfectly to at least 8 threads.
Design and implementation
ConcurrentOnHeapHnswGraph is very similar to OnHeapHnswGraph with concurrent backing Collections. The main addition is a getView operation to provide a threadsafe snapshot of the graph for searches. The View uses a CompletionTracker class to provide a kind of snapshot isolation; otherwise it is impossible to prevent partially added nodes from causing very difficult-to-debug race conditions. This is used during construction as well as for user-invoked searches.
(The initial CompletionTracker implementation was just an AtomicInteger clock and a Map<node Integer, clock Integer>, but the constant box/unbox introduced significant CPU and GC overhead. The current implementation is a result of optimizing that.)
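The described initial (boxed) design can be sketched as follows; this is the concept only, not the PR's optimized code, which replaced the boxed map precisely because of the overhead mentioned above. A node becomes visible to a View only if it completed before the View captured its clock reading.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of snapshot isolation via a logical clock: each completed node
// records the time it finished, and a View (which reads the clock once at
// creation) only sees nodes that completed strictly before that reading.
class SimpleCompletionTracker {
  final AtomicInteger clock = new AtomicInteger();
  final Map<Integer, Integer> completionTimes = new ConcurrentHashMap<>();

  void markComplete(int node) {
    completionTimes.put(node, clock.getAndIncrement());
  }

  int getClock() { // a View captures this once at creation
    return clock.get();
  }

  boolean completedBefore(int node, int viewClock) {
    Integer t = completionTimes.get(node);
    return t != null && t < viewClock; // in-progress nodes are invisible
  }
}
```

The Integer boxing on every lookup is the CPU/GC cost the description mentions; a primitive-keyed structure removes it while keeping the same clock semantics.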
ConcurrentHnswGraphBuilder adds an incremental RAM-used estimate as a return value to addGraphNode, and a buildAsync method that takes an ExecutorService for fine-grained control over parallelism. Otherwise, it follows the API of HnswGraphBuilder closely. (Closely enough that my original PR extracted a common interface and added factories so that they could be plugged in interchangeably, but this is now removed after Michael's feedback.)
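The shape of such a driver might look like this (a sketch under assumptions: ConcurrentBuilder and its addGraphNode signature are stand-ins, and the real buildAsync may differ). Each node insertion is an independent task, the caller controls parallelism by sizing the pool, and the per-node RAM estimates are summed:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

class BuildDriver {
  // stand-in for the real builder API
  interface ConcurrentBuilder {
    long addGraphNode(int node); // returns incremental RAM-used estimate
  }

  static long buildAsync(ConcurrentBuilder builder, int nodeCount, ExecutorService exec)
      throws InterruptedException, ExecutionException {
    List<Future<Long>> futures = new ArrayList<>();
    for (int i = 0; i < nodeCount; i++) {
      final int node = i;
      futures.add(exec.submit(() -> builder.addGraphNode(node))); // one task per node
    }
    long ramUsed = 0;
    for (Future<Long> f : futures) {
      ramUsed += f.get(); // wait for completion, propagate failures
    }
    return ramUsed;
  }
}
```

Taking the ExecutorService from the caller (rather than creating threads internally) matches benwtrent's earlier "we should always expect an executor from the caller" comment.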
The key to achieving good concurrency while maintaining correctness without synchronization is that we track in-progress node additions across all threads in a ConcurrentSkipListSet. After adding ourselves in addGraphNode, we take a snapshot of this set (this is O(1) for CSLS) and consider all other in-progress updates as neighbor candidates (subject to normal level constraints).
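The insertion protocol this describes can be sketched as follows (hypothetical names; the search-and-link work is elided). Registering before snapshotting is what makes the scheme safe: of any two concurrent inserts, at least one is guaranteed to see the other in its snapshot.

```java
import java.util.NavigableSet;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentSkipListSet;

// Sketch of the in-progress bookkeeping: register, snapshot the other
// concurrent inserts, do the (elided) linking, then always deregister.
class InsertionProtocol {
  final ConcurrentSkipListSet<Integer> inProgress = new ConcurrentSkipListSet<>();

  void addGraphNode(int node) {
    inProgress.add(node); // 1. register BEFORE reading the set
    try {
      NavigableSet<Integer> concurrentInserts = new TreeSet<>(inProgress);
      concurrentInserts.remove(node); // everyone in progress but us
      // 2. search the graph and add links, treating concurrentInserts as
      //    additional neighbor candidates (elided)
    } finally {
      inProgress.remove(node); // 3. deregister even if linking fails
    }
  }
}
```

CSLS iteration is weakly consistent, so the snapshot reflects every insert registered before ours without blocking writers.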
In general, the main concern with the Concurrent Builder compared to the serial is to make sure that partially complete operations never “leak” to other threads. In particular,
One more point is a little subtle –
The main graph test suites are identical except for changes to perform concurrent graph builds instead of serial, and the addition of testing the incremental ram usage estimate from ConcurrentHnswGraphBuilder::addGraphNode. I also added new tests for ConcurrentNeighborSet.
Minor changes to existing code: