This repository was archived by the owner on Apr 26, 2024. It is now read-only.

Add endpoints for backfilling history (MSC2716)#9247

Merged
erikjohnston merged 96 commits into develop from
eric/msc2716-backfilling-history
Jun 22, 2021

Conversation

@MadLittleMods
Contributor

@MadLittleMods MadLittleMods commented Jan 28, 2021

Implement MSC2716 to add endpoints for backfilling history. This PR does not support federation use cases with the "marker" and "insertion" events.

For reviewers, it's probably best to see this in action with the associated Complement tests.


Complement MR: matrix-org/complement#68

TARDIS visualization MR: matrix-org/tardis#1

Getting started

The PR adds the POST /_matrix/client/unstable/org.matrix.msc2716/rooms/<roomID>/batch_send?prev_event=<eventID>&chunk_id=<chunkID> endpoint which can insert a chunk of events historically back in time next to the given prev_event. chunk_id comes from next_chunk_id in the response of the batch send endpoint and is derived from the "insertion" events added to each chunk. It's not required for the first batch send.

{
    "events": [ ... ],
    "state_events_at_start": [ ... ]
}

The /batch_send endpoint is behind a feature flag: experimental_features -> msc2716_enabled (defined in homeserver.yaml). It is only available to application services, so you will need to add one to your homeserver.yaml and use its as_token to interact with the API (other tokens will 403).

state_events_at_start is used to define the historical state events needed to auth the events, such as join events. These events float outside of the normal DAG as outliers and won't be visible in the chat history, which also allows us to insert multiple chunks without a bunch of @mxid joined the room noise between each chunk.

events is a chronological (oldest to newest) chunk/list of events you want to insert. There is a reverse-chronological constraint on chunks, so once you insert some messages, you can only insert older ones after that. tl;dr: insert from your most recent history -> oldest history.

Why? depth is not re-calculated when historical messages are inserted into the DAG, so we have to take care to insert in the right order. Events are sorted by (topological_ordering, stream_ordering) where topological_ordering is just depth. Normally, stream_ordering is an auto-incrementing integer, but for backfilled=true events it decrements. Historical messages are all inserted at the same depth and marked as backfilled, so the stream_ordering decrements and each event sorts behind the next. (from #9247 (comment))
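The ordering rule above can be sketched in plain Python. This is a toy model, not Synapse code: historical events all share one depth, and because each later-persisted backfilled event gets a lower (more negative) stream_ordering, the batch still renders oldest-first.

```python
# Toy model of the (topological_ordering, stream_ordering) sort described
# above. Not real Synapse code -- just illustrates why backfilled events,
# which all share one depth, still order correctly as stream_ordering
# decrements.

def sort_events(events):
    """Events render oldest-first when sorted by (depth, stream_ordering)."""
    return sorted(events, key=lambda e: (e["depth"], e["stream_ordering"]))

live = [
    {"id": "live1", "depth": 10, "stream_ordering": 100},
    {"id": "live2", "depth": 11, "stream_ordering": 101},
]
# Two historical events inserted at the same depth; the one persisted
# second gets a lower stream_ordering, so it sorts first (i.e. older).
historical = [
    {"id": "hist_newer", "depth": 10, "stream_ordering": -1},
    {"id": "hist_older", "depth": 10, "stream_ordering": -2},
]

ordered = [e["id"] for e in sort_events(live + historical)]
print(ordered)  # ['hist_older', 'hist_newer', 'live1', 'live2']
```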

If you're curious to look at a known working example, the Complement tests have barebones test cases interacting with this API, matrix-org/complement#68

Steps to reproduce:

  1. In your homeserver.yaml, add the feature flag to enable the /batchsend endpoint
    experimental_features:
      # Enable history backfilling support
      msc2716_enabled: true
  2. Define an application service in your homeserver.yaml. This could be one of your existing bridges. See the application service guide for an example of what the registration file would look like. We only care about the as_token in this case.
    app_service_config_files:
      - /data/my-as-registration.yaml
  3. POST /_matrix/client/unstable/org.matrix.msc2716/rooms/<roomID>/batch_send?prev_event=<eventID>&chunk_id=<chunkID> with the Authorization: Bearer <as_token> header and the following body:
    - prev_event is the event you want to insert next to. Your historical messages will appear after this event so pick one where the timestamp makes sense. To be a little more idiomatic for inserting historical events that happened before the Matrix room creation, prev_event could be some primordial creation event for the room.
    - chunk_id comes from next_chunk_id in the previous batch send response. It connects the last (most recent) message to the insertion event of the previous chunk. The parameter is not needed for your first chunk because there is nothing to connect to yet. Note: The messages will appear correctly on your local server without it but it's important to have this set for federated servers so messages backfill correctly.
    - You can change origin_server_ts in your events to whatever you want to display it as
    - Add a join event in state_events_at_start for any message author in the events
  - m.historical will automatically be added to each of your events. This is important to mark them as backfilled, sort them correctly, and skip push notification actions.
    {
        "events": [{
            "type": "m.room.message",
            "sender": "@maria:hs1",
            "origin_server_ts": 1620336731128,
            "content": {
                "msgtype": "m.text",
                "body": "Some historical message",
                "m.historical": true
            }
        }],
        "state_events_at_start": [{
            "type": "m.room.member",
            "sender": "@maria:hs1",
            "origin_server_ts": 1620336731128,
            "content": {
                "membership": "join"
            },
            "state_key": "@maria:hs1"
        }]
    }
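The steps above can be driven from a small script. A stdlib-only sketch of building the batch_send request: the homeserver URL, room ID, event ID, token, and chunk ID below are all placeholders, and the actual POST is left commented out so the snippet stays runnable offline.

```python
# Sketch of calling the MSC2716 batch_send endpoint. HOMESERVER, ROOM_ID,
# PREV_EVENT, AS_TOKEN, and the "abc123" chunk ID are placeholder values --
# substitute your own.
import json
from urllib.parse import quote, urlencode

HOMESERVER = "http://localhost:8008"      # placeholder
ROOM_ID = "!aqtKBNDKZSWCDMRpSH:hs1"       # placeholder
PREV_EVENT = "$someEventId"               # placeholder
AS_TOKEN = "secret-as-token"              # placeholder

def batch_send_url(room_id, prev_event, chunk_id=None):
    """Build the batch_send URL; chunk_id is omitted on the first batch."""
    params = {"prev_event": prev_event}
    if chunk_id is not None:
        params["chunk_id"] = chunk_id
    return (
        f"{HOMESERVER}/_matrix/client/unstable/org.matrix.msc2716"
        f"/rooms/{quote(room_id)}/batch_send?{urlencode(params)}"
    )

headers = {"Authorization": f"Bearer {AS_TOKEN}"}
body = json.dumps({"events": [], "state_events_at_start": []})

print(batch_send_url(ROOM_ID, PREV_EVENT))
# The response contains next_chunk_id; feed it into the next (older) batch:
print(batch_send_url(ROOM_ID, PREV_EVENT, chunk_id="abc123"))

# To actually send (requires a running homeserver):
# import urllib.request
# req = urllib.request.Request(url, data=body.encode(), headers=headers, method="POST")
# resp = json.load(urllib.request.urlopen(req))
```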

Dev notes

COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh

./scripts-dev/complement.sh

API:

Relevant code:

event_creation_handler.create_and_send_nonmember_event
EventCreationHandler.create_event
EventCreationHandler.create_new_client_event
EventBuilder.build
_generate_local_out_of_band_leave

persist_events or persist_event
_maybe_start_persisting
_persist_events
_persist_events_and_state_updates
_persist_events_txn
_update_metadata_tables_txn
_handle_mult_prev_events
-> simple_insert_many_txn `event_edges`

Call stacks for passing depth, prev_event_ids, auth_event_ids:

create_and_send_nonmember_event
create_event
create_new_client_event
builder.build
update_membership
update_membership_locked
_local_membership_update
create_event

COMPLEMENT_BASE_IMAGE=complement-synapse go test -tags msc2716 -v -count=1 ./tests/main_test.go ./tests/msc2716_test.go

Accessing the database

Access the sqlite database in the Docker container after Complement runs. Be sure to change the defer deployment.Destroy(t) call in the Complement tests to a defer time.Sleep(2 * time.Hour) so the Docker container stays alive after it's finished.

$ docker exec -it b91db4912057 /bin/bash
$ apt-get update
$ apt-get install sqlite3
$ sqlite3 /conf/homeserver.db

.tables
.schema events

select * from events where event_id='$LfK8zWi0g_snvqwi93vWGKj-iTf7gxWh_WRhTD5pALc';
select * from event_json where event_id='$LfK8zWi0g_snvqwi93vWGKj-iTf7gxWh_WRhTD5pALc';
select * from event_auth where event_id='$LfK8zWi0g_snvqwi93vWGKj-iTf7gxWh_WRhTD5pALc';
select * from event_auth_chains where event_id='$LfK8zWi0g_snvqwi93vWGKj-iTf7gxWh_WRhTD5pALc';
select * from event_auth_chain_to_calculate where event_id='$LfK8zWi0g_snvqwi93vWGKj-iTf7gxWh_WRhTD5pALc';

Event signing

test-event-signing.py (v1)

test-event-signing.py

from signedjson.key import (
    NACL_ED25519,
    decode_signing_key_base64,
    decode_verify_key_bytes,
    decode_verify_key_base64,
    generate_signing_key,
    get_verify_key,
    is_signing_algorithm_supported,
    read_signing_keys,
    write_signing_keys,
)
from signedjson.sign import (
    sign_json, verify_signed_json, SignatureVerifyException
)

SERVER_NAME = 'hs1'
KEY_ID = 'ed25519:a_imNW'
# This is from the signature key_id `ed25519:a_FBvY`
VERSION = 'a_imNW'
SIGNING_KEY_BASE_64 = "vX2BK8l89Qk2EAVcaCiGiTUWG59dwleotTjzqu80C4w"
VERIFY_KEY_BASE_64 = "9d92WUgYwsKY0oWxOR1R61SJar9+D7uvz59IEDEjqyI"
JSON = {'my_key': 'my_data'}
# JSON = {
# 	'auth_events': ['$M7y6Yy1agkTx4eLXpZRndF1BdGk5pvCkjNCZkKlGIdY', '$ZtnAy9W445OtyeqCEk9Dcn6Nr14kYb4z-fJM8MwcVmI', '$gf4M_E3lR3QhumzTKMCncsDrTLZSVjuhcHZ4y6EtJzA', '$mSfCznaiy9xZvR7uQb7NdbNVS-1DNSe7lilB649H78g', '$8WTDdQw82NY8SUWUom53IjM1HzXiaB71oeU1mPjz7XY', '$ywbCJgiKeXtXuhN8OzxFD04-Esj89nxSt5miRxM5f78', '$ooT281Jg3nfzhjdE1rq0hUEOpu8sX1ublFUDYRPuyOk'], 'prev_events': ['$zWKxzzgcSVoLUDAeauzbOXkCTfrfNCcRQwEQToTZpof'],
# 	'type': 'm.room.member',
# 	'room_id': '!aqtKBNDKZSWCDMRpSH:hs1',
# 	'sender': '@maria:hs1',
# 	'content': {'membership': 'join'},
# 	'depth': 1,
# 	'prev_state': [],
# 	'state_key':
# 	'@maria:hs1',
# 	'origin': 'hs1', 
# 	'origin_server_ts': 1620167174120,
# 	'hashes': {'sha256': 'Cw+4O3jZVocrTLhdJ9vfoZq1SRYk4RB+/8X5EQ/xaXU'},
# 	'signatures': {'hs1': {'ed25519:a_imNW': '8/II10y7FO7nMBZLeF6aRMkDAkSjnRsJIu8oPGfG9AEvmvEvusNwe6klWfw7QPB0NfJpF5wyOd5rQrbX7ymcAg'}},
# 	'unsigned': {}
# }


#signing_key = generate_signing_key('zxcvb')
#verify_key = get_verify_key(signing_key)

signing_key = decode_signing_key_base64(NACL_ED25519, VERSION, SIGNING_KEY_BASE_64)
#verify_key = decode_verify_key_bytes(KEY_ID, VERIFY_KEY_BASE_64)
verify_key = decode_verify_key_base64(NACL_ED25519, VERSION, VERIFY_KEY_BASE_64)
verify_key_derived_from_signing_key = get_verify_key(signing_key)
if verify_key != verify_key_derived_from_signing_key:
	print("WARNING: verify_key and verify_key_derived_from_signing_key are different")

print(f"signing_key={signing_key}")
print(f"verify_key={verify_key} verify_key_derived_from_signing_key={verify_key_derived_from_signing_key}")

signed_json = sign_json(JSON, SERVER_NAME, signing_key)


try:
    verify_signed_json(signed_json, SERVER_NAME, verify_key)
    print('Signature is valid')
except SignatureVerifyException:
    print('Signature is invalid')
test-event-signing.py (v2)

test-event-signing.py

import copy
from signedjson.key import (
    NACL_ED25519,
    decode_signing_key_base64,
    decode_verify_key_bytes,
    decode_verify_key_base64,
    generate_signing_key,
    get_verify_key,
    is_signing_algorithm_supported,
    read_signing_keys,
    write_signing_keys,
)
from signedjson.sign import (
    sign_json, verify_signed_json, SignatureVerifyException
)

SERVER_NAME = 'hs1'
KEY_ID = 'ed25519:a_imNW'
# This is from the signature key_id `ed25519:a_FBvY`
VERSION = 'a_imNW'
SIGNING_KEY_BASE_64 = "vX2BK8l89Qk2EAVcaCiGiTUWG59dwleotTjzqu80C4w"
VERIFY_KEY_BASE_64 = "9d92WUgYwsKY0oWxOR1R61SJar9+D7uvz59IEDEjqyI"


#signing_key = generate_signing_key('zxcvb')
#verify_key = get_verify_key(signing_key)

signing_key = decode_signing_key_base64(NACL_ED25519, VERSION, SIGNING_KEY_BASE_64)
#verify_key = decode_verify_key_bytes(KEY_ID, VERIFY_KEY_BASE_64)
verify_key = decode_verify_key_base64(NACL_ED25519, VERSION, VERIFY_KEY_BASE_64)
verify_key_derived_from_signing_key = get_verify_key(signing_key)
if verify_key != verify_key_derived_from_signing_key:
	print("WARNING: verify_key and verify_key_derived_from_signing_key are different")

print(f"signing_key={signing_key}")
print(f"verify_key={verify_key} verify_key_derived_from_signing_key={verify_key_derived_from_signing_key}")

#JSON = {'my_key': 'my_data'}
#signed_json = sign_json(JSON, SERVER_NAME, signing_key)
KNOWN_GOOD_JSON = {
	'auth_events': ['$M7y6Yy1agkTx4eLXpZRndF1BdGk5pvCkjNCZkKlGIdY'],
	'prev_events': ['$M7y6Yy1agkTx4eLXpZRndF1BdGk5pvCkjNCZkKlGIdY'],
	'type': 'm.room.member',
	'room_id': '!aqtKBNDKZSWCDMRpSH:hs1',
	'sender': '@the-bridge-user:hs1',
	'content': {'membership': 'join'},
	'depth': 2,
	'prev_state': [],
	'state_key': '@the-bridge-user:hs1',
	'origin': 'hs1',
	'origin_server_ts': 1620167173759,
	'hashes': {'sha256': 'FSyUP1X9VrlCCPztDLooSZSjcWqbUV2T7j7Z3o06zSw'},
	'signatures': {'hs1': {'ed25519:a_imNW': 'CO6WUNuoZb8bg0cH9zoywZWqzEc2YogZsp6jqhISjhIvv/HDMJYf0INlxYpo3m67xcXVYrh1LgeVw4qSRFWPDQ'}},
	'unsigned': {}
}
KNOWN_BAD_JSON = {
	'auth_events': ['$M7y6Yy1agkTx4eLXpZRndF1BdGk5pvCkjNCZkKlGIdY', '$ZtnAy9W445OtyeqCEk9Dcn6Nr14kYb4z-fJM8MwcVmI', '$gf4M_E3lR3QhumzTKMCncsDrTLZSVjuhcHZ4y6EtJzA', '$mSfCznaiy9xZvR7uQb7NdbNVS-1DNSe7lilB649H78g', '$8WTDdQw82NY8SUWUom53IjM1HzXiaB71oeU1mPjz7XY', '$ywbCJgiKeXtXuhN8OzxFD04-Esj89nxSt5miRxM5f78', '$ooT281Jg3nfzhjdE1rq0hUEOpu8sX1ublFUDYRPuyOk'],
	'prev_events': ['$zWKxzzgcSVoLUDAeauzbOXkCTfrfNCcRQwEQToTZpof'],
	'type': 'm.room.member',
	'room_id': '!aqtKBNDKZSWCDMRpSH:hs1',
	'sender': '@maria:hs1',
	'content': {'membership': 'join'},
	'depth': 1,
	'prev_state': [],
	'state_key':
	'@maria:hs1',
	'origin': 'hs1', 
	'origin_server_ts': 1620167174120,
	'hashes': {'sha256': 'Cw+4O3jZVocrTLhdJ9vfoZq1SRYk4RB+/8X5EQ/xaXU'},
	'signatures': {'hs1': {'ed25519:a_imNW': '8/II10y7FO7nMBZLeF6aRMkDAkSjnRsJIu8oPGfG9AEvmvEvusNwe6klWfw7QPB0NfJpF5wyOd5rQrbX7ymcAg'}},
	'unsigned': {}
}
# Switch around KNOWN_GOOD_JSON and KNOWN_BAD_JSON here
signed_json = KNOWN_BAD_JSON


copied_json = copy.deepcopy(signed_json)
double_checked_signed_json = sign_json(copied_json, SERVER_NAME, signing_key)
double_checked_signatures = double_checked_signed_json.get("signatures", {})
double_checked_server_signature = double_checked_signatures.get(SERVER_NAME, {}).get(KEY_ID)
print(f"double_checked_signatures={double_checked_signatures}")
server_signature = signed_json.get("signatures", {}).get(SERVER_NAME, {}).get(KEY_ID)
if double_checked_server_signature != server_signature:
	print(f"WARNING: When we re-signed the object and checked the signatures, they did NOT match!\ndouble_checked_server_signature={double_checked_server_signature}\nserver_signature={server_signature}")


try:
    verify_signed_json(signed_json, SERVER_NAME, verify_key)
    print('Signature is valid')
except SignatureVerifyException:
    print('Signature is invalid')
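Both scripts above lean on signedjson's sign_json. Under the hood, Matrix signs the canonical JSON of the event with the signatures and unsigned keys stripped out. A stdlib-only sketch of that canonicalisation step (the real signing then Ed25519-signs these bytes; the helper names here are illustrative, not signedjson APIs):

```python
# Sketch of Matrix canonical JSON, the byte form that actually gets signed:
# keys sorted, no insignificant whitespace, UTF-8 encoded, with the
# `signatures` and `unsigned` keys removed first. Helper names are
# illustrative only.
import json

def canonical_json(value):
    """Matrix canonical JSON: sorted keys, compact separators, UTF-8."""
    return json.dumps(
        value, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")

def signable_bytes(event):
    """Copy of the event without `signatures`/`unsigned`, canonicalised."""
    stripped = {k: v for k, v in event.items() if k not in ("signatures", "unsigned")}
    return canonical_json(stripped)

event = {
    "type": "m.room.member",
    "content": {"membership": "join"},
    "unsigned": {},
    "signatures": {"hs1": {}},
}
print(signable_bytes(event))
# b'{"content":{"membership":"join"},"type":"m.room.member"}'
```

This is why two events that differ only in their signatures or unsigned sections produce the same signable bytes.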

Todo

  • ts query param to override origin_server_ts
  • prev_event query param
  • Proper depth
  • m.historical event field
  • Add tests within Synapse

Pull Request Checklist

  • Pull request is based on the develop branch
  • Pull request includes a changelog file. The entry should:
    • Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from EventStore to EventWorkerStore.".
    • Use markdown where necessary, mostly for code blocks.
    • End with either a period (.) or an exclamation mark (!).
    • Start with a capital letter.
  • Pull request includes a sign off
  • Code style is correct (run the linters)

MadLittleMods added a commit to MadLittleMods/tardis that referenced this pull request Feb 2, 2021
Edits to make TARDIS work with Synapse while writing Complement tests for [MSC 2716](matrix-org/matrix-spec-proposals#2716).

 - matrix-org/synapse#9247
 - matrix-org/complement#68
TODO: Is the assumption that any time we pass in prev_event_ids, we use the same depth
good enough? What corner cases are there? I see that we also pass in prev_event_ids in
synapse/handlers/room_member.py, so we need to make sure that still works as expected.
@MadLittleMods MadLittleMods force-pushed the eric/msc2716-backfilling-history branch from 46625b7 to 9b5e057 Compare February 5, 2021 04:43
@MadLittleMods MadLittleMods requested a review from a team February 5, 2021 06:10
@clokep
Member

clokep commented Feb 5, 2021

@MadLittleMods This seems to have some style / CI issues. Were you looking for general feedback or were you hoping to get this merged?

@MadLittleMods
Contributor Author

MadLittleMods commented Feb 5, 2021

@clokep Some general comments, and answers to the questions in the PR: what tests to add and where, etc.

What more is needed, content-wise, to merge? Other than that, there are a few lints to clean up.

Should this go behind a feature flag, or be an unstable-build type of thing?

old_depth = await self._store.get_max_depth_of(prev_event_ids)
depth = old_depth + 1
# Otherwise, progress the depth as normal
if depth is None:
@MadLittleMods MadLittleMods Jun 8, 2021

The last test failures are coming from this new line (if I comment it out to always calculate depth, the tests pass). Is there a way to see logger.info output from within the app while running the twisted.trial tests? I'm having trouble digging into this further without being able to log.

Test failure: https://github.com/matrix-org/synapse/pull/10049/checks?check_run_id=2764607442#step:8:2373

Reproduction locally:

python -m twisted.trial tests.handlers.test_presence.PresenceJoinTestCase.test_remote_gets_presence_when_local_user_joins

Member
Setting the env to SYNAPSE_TEST_LOG_LEVEL=DEBUG will give you debug logs in _trial_temp/test.log 🙂

@MadLittleMods MadLittleMods Jun 9, 2021

It looks like I accidentally set depth to False in the tests.

Found the problem first by logging what the weird depth value was above the line in question and saw it was False at some point:

logger.info("depth1=%s", depth)

Then I added this line, which will throw an exception (because there is no %s in the format string) when the condition is met, giving us a full stack trace.

if depth == False:
    logging.info("Why is depth False?", depth)
$ SYNAPSE_TEST_LOG_LEVEL=DEBUG python -m twisted.trial tests.handlers.test_presence.PresenceJoinTestCase.test_remote_gets_presence_when_local_user_joins
tests.handlers.test_presence
  PresenceJoinTestCase
    test_remote_gets_presence_when_local_user_joins ...                  [FAIL]

===============================================================================
[FAIL]
Traceback (most recent call last):
  File "/Users/eric/Documents/github/element/synapse/tests/handlers/test_presence.py", line 815, in test_remote_gets_presence_when_local_user_joins
    self._add_new_user(room_id, "@alice:server2")
-  File "/Users/eric/Documents/github/element/synapse/tests/handlers/test_presence.py", line 866, in _add_new_user
-    event = self.get_success(builder.build(prev_event_ids, None, False))
  File "/Users/eric/Documents/github/element/synapse/tests/unittest.py", line 500, in get_success
    return self.successResultOf(d)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/site-packages/twisted/trial/_synctest.py", line 706, in successResultOf
    self.fail(
twisted.trial.unittest.FailTest: Success result expected on <Deferred at 0x10de9a3a0 current result: None>, found failure result instead:
Traceback (most recent call last):
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/site-packages/twisted/internet/defer.py", line 477, in callback
    self._startRunCallbacks(result)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/site-packages/twisted/internet/defer.py", line 580, in _startRunCallbacks
    self._runCallbacks()
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/site-packages/twisted/internet/defer.py", line 662, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/site-packages/twisted/internet/defer.py", line 1514, in gotResult
    current_context.run(_inlineCallbacks, r, g, status)
--- <exception caught here> ---
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/site-packages/twisted/internet/defer.py", line 1445, in _inlineCallbacks
    result = current_context.run(g.send, result)
  File "/Users/eric/Documents/github/element/synapse/synapse/events/builder.py", line 142, in build
    logging.info("Why is depth False?", depth)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 2070, in info
    root.info(msg, *args, **kwargs)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 1434, in info
    self._log(INFO, msg, args, **kwargs)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 1577, in _log
    self.handle(record)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 1587, in handle
    self.callHandlers(record)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 1649, in callHandlers
    hdlr.handle(record)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 950, in handle
    self.emit(record)
  File "/Users/eric/Documents/github/element/synapse/tests/test_utils/logging_setup.py", line 28, in emit
    log_entry = self.format(record)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 925, in format
    return fmt.format(record)
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 664, in format
    record.message = record.getMessage()
  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/logging/__init__.py", line 369, in getMessage
    msg = msg % self.args
builtins.TypeError: not all arguments converted during string formatting

Is there a better way to get a stack trace/call stack? I tried logger.exception and adding exc_info=True to the logger functions, but they just gave a small traceback to the defer and don't play well with the async stuff.

2021-06-09 01:56:40-0500 [-] 2021-06-09 01:56:40,722 - root - 142 - INFO - sentinel - Why is depth False?
	Traceback (most recent call last):
	  File "/Users/eric/.pyenv/versions/3.8.6/lib/python3.8/site-packages/twisted/internet/defer.py", line 1445, in _inlineCallbacks
	    result = current_context.run(g.send, result)
	StopIteration: _GetStateGroupDelta(prev_group=5, delta_ids={('m.room.history_visibility', ''): '$6KwfF_Su1zGCDsZn4FvqkfkcPqI2CnPn_-Uue7pXRCc'})

Contributor Author

Documented the SYNAPSE_TEST_LOG_LEVEL=DEBUG (-> _trial_temp/test.log) trick in #10148. Thanks for sharing!

Member

Can you raise if depth is wrong? That gives me a good stack trace. I think the defer code does special things to make the stack traces work out?

Contributor Author

raise does work well 👍 Thanks for the better way to do this!

Is there a way I can just log a line and get a stack trace without breaking the normal execution flow?

In JavaScript, I would use console.log('something bad happened', new Error().stack);

Member

I think creating a twisted Failure object might work: https://github.com/matrix-org/synapse/blob/develop/synapse/handlers/pagination.py#L285-L288, but I'm not 100% sure
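For reference, the stdlib can do what the question above asks without a Twisted Failure: either pass stack_info=True to a logging call, or capture the current stack with the traceback module. A minimal sketch (suspicious_depth is an illustrative function name, not Synapse code):

```python
# Logging the current call stack without interrupting the normal execution
# flow -- the Python analogue of JavaScript's
# console.log('something bad happened', new Error().stack).
import logging
import traceback

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def suspicious_depth(depth):
    """Illustrative helper: log a stack trace when depth looks wrong."""
    stack = ""
    if depth is False:
        # Option 1: the logging module attaches the stack for you.
        logger.info("Why is depth False?", stack_info=True)
        # Option 2: grab the stack as a string yourself.
        stack = "".join(traceback.format_stack())
        logger.info("stack was:\n%s", stack)
    return stack

stack = suspicious_depth(False)
```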

Conflicts:
	synapse/config/experimental.py
	synapse/handlers/room_member.py
@erikjohnston erikjohnston left a comment

I think once we've fixed up the paths we should be done 🙂

@erikjohnston
Member

On the Complement tests: you may need to merge the latest develop into this branch and into the same-named branch on the complement repo (or delete that branch)? The test failures look to be mostly for things like spaces and knocking, which have recently had work done on them.

@erikjohnston erikjohnston left a comment

🎉

@MadLittleMods
Contributor Author

Woot! Thank you @erikjohnston for all the help guiding this along and review to push this to the right places! 🐗


Labels

T-Enhancement New features, changes in functionality, improvements in performance, or user-facing enhancements.


7 participants