🐛 Describe the bug
When Memory._add_to_vector_store() in main.py determines a memory is unchanged (NONE event, ~line 578), it calls:
self.vector_store.update(
vector_id=memory_id,
vector=None, # no new vector, content didn't change
payload=updated_metadata,
)
In valkey.py, the update() method unconditionally serializes the vector:
"embedding": np.array(vector, dtype=np.float32).tobytes(),
When vector is None, np.array(None, dtype=np.float32) produces a 0-d NaN array whose .tobytes() is a single float32 (4 bytes), which overwrites the correct embedding (e.g. 4096 bytes for a 1024-dim model). The memory's text content remains intact, but it becomes permanently unsearchable via vector similarity because the embedding no longer matches the dimensionality of the index schema.
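The size mismatch is easy to verify in isolation (standalone sketch, not the library's code):

```python
import numpy as np

# What update() serializes when vector is None:
bad = np.array(None, dtype=np.float32)  # 0-d array holding NaN
print(len(bad.tobytes()))  # 4 bytes

# What a correct 1024-dim embedding serializes to:
good = np.zeros(1024, dtype=np.float32)
print(len(good.tobytes()))  # 4096 bytes
```

Any subsequent KNN query against this field fails to match the record, since 4 bytes cannot be reinterpreted as a 1024-dim float32 vector.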
Steps to reproduce:
- A memory is stored (correct embedding written)
- The same fact is encountered again in a subsequent add() call
- The LLM returns a NONE action (content unchanged)
- update() is called with vector=None to update session metadata
- The embedding is overwritten with 4 bytes of garbage
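A minimal sketch of a possible guard in update(): only serialize the embedding field when a new vector is actually supplied, so metadata-only updates leave the stored embedding untouched. The function and field names here are illustrative assumptions, not the library's actual schema:

```python
import numpy as np

def build_update_mapping(vector=None, payload=None):
    """Hypothetical helper: build the hash mapping for an update,
    skipping the embedding field when no new vector is given."""
    mapping = dict(payload or {})
    if vector is not None:
        mapping["embedding"] = np.asarray(vector, dtype=np.float32).tobytes()
    return mapping

# Metadata-only update (NONE event): no "embedding" key is written,
# so the existing 4096-byte embedding in the store survives.
m = build_update_mapping(vector=None, payload={"updated_at": "2024-01-01"})
assert "embedding" not in m
```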