
Add Sudoku protocol inbound & outbound support #2397

Merged
wwqgtxx merged 9 commits into MetaCubeX:Alpha from saba-futai:Alpha
Nov 28, 2025

Conversation


@saba-futai saba-futai commented Nov 28, 2025

This PR integrates the Sudoku protocol into mihomo as both an outbound proxy type and an inbound listener type.

Config Examples

Outbound:

proxies:
  - name: sudoku-out
    type: sudoku
    server: 1.2.3.4
    port: 8443
    key: "27efbf96-8f28-4090-8bc8-d35a379d76ee"
    aead-method: none
    padding-min: 1
    padding-max: 10
    table-type: prefer_ascii   # or prefer_entropy

Inbound:

listeners:
  - name: sudoku-in
    type: sudoku
    listen: 0.0.0.0
    port: 8443
    key: "27efbf96-8f28-4090-8bc8-d35a379d76ee"
    aead-method: none
    padding-min: 1
    padding-max: 9
    table-type: prefer_ascii
    handshake-timeout: 5       # seconds, optional

Notes / Limitations

    • Sudoku inbound currently implements the “standard” mode only; “standard” here refers to pure Sudoku obfuscation.
    • Uses apis.ServerHandshake (HTTP mask + Sudoku obfs + AEAD + timestamp + address) as upstream.
    • Does not implement sudoku-main’s fallback-to-decoy or Mieru Split/Hybrid mode.
    • Sudoku outbound uses our own dialer but keeps the on-wire protocol fully compatible with the original project.

Testing

go build ./... (pass)
go test ./adapter/... ./listener/... ./constant (pass)
Tested inbound/outbound locally and on a server; proxying succeeded.


wwqgtxx commented Nov 28, 2025

NEVER modify the Go version; we need to maintain Go 1.20 compatibility.

@saba-futai (Author)

I have downgraded and verified with go mod tidy -go=1.20; everything is fine.


wwqgtxx commented Nov 28, 2025

You need to add a test to inbound like in listener/inbound/*_test.go to ensure that the inbound/outbound tests pass correctly.


wwqgtxx commented Nov 28, 2025

In addition, other dependencies in go.mod should not be downgraded.

@saba-futai (Author)

Added the tests, and they pass. Log:

--- PASS: TestInboundSudoku_Basic (1.43s)
    --- PASS: TestInboundSudoku_Basic/Sequential (0.04s)
    --- PASS: TestInboundSudoku_Basic/Concurrent (1.02s)
--- PASS: TestInboundSudoku_Entropy (1.47s)
    --- PASS: TestInboundSudoku_Entropy/Sequential (0.35s)
    --- PASS: TestInboundSudoku_Entropy/Concurrent (0.71s)
--- PASS: TestInboundSudoku_Padding (1.48s)
    --- PASS: TestInboundSudoku_Padding/Sequential (0.22s)
    --- PASS: TestInboundSudoku_Padding/Concurrent (0.89s)
PASS
ok      github.com/metacubex/mihomo/listener/inbound    2.157s


wwqgtxx commented Nov 28, 2025

=== RUN   TestInboundSudoku_Basic/Concurrent
panic: runtime error: slice bounds out of range [:265] with capacity 0

goroutine 686236 [running]:
github.com/saba-futai/sudoku/pkg/obfs/sudoku.(*Conn).Read(0xc00acfe480, {0xc00e99650a, 0x2, 0x476d29?})
	/home/runner/go/pkg/mod/github.com/saba-futai/sudoku@v0.0.1-b/pkg/obfs/sudoku/conn.go:185 +0x5cd
io.ReadAtLeast({0x7f2b383f05c8, 0xc00acfe480}, {0xc00e99650a, 0x2, 0x2}, 0x2)
	/opt/hostedtoolcache/go/1.24.10/x64/src/io/io.go:335 +0x91
io.ReadFull(...)
	/opt/hostedtoolcache/go/1.24.10/x64/src/io/io.go:354
github.com/saba-futai/sudoku/pkg/crypto.(*AEADConn).Read(0xc0120f1e00, {0xc007cd5500, 0x3500, 0xc0025fa7b8?})
	/home/runner/go/pkg/mod/github.com/saba-futai/sudoku@v0.0.1-b/pkg/crypto/aead.go:108 +0x191
github.com/metacubex/mihomo/common/net/deadline.(*Conn).Read(0xc00bdcc700, {0xc007cd5500, 0x3500, 0x3500})
	/home/runner/work/mihomo/mihomo/common/net/deadline/conn.go:70 +0x2be
crypto/tls.(*atLeastReader).Read(0xc015675d10, {0xc007cd5500?, 0x34fb?, 0x7f2b388257b8?})
	/opt/hostedtoolcache/go/1.24.10/x64/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc002749eb8, {0x12c4060, 0xc015675d10})
	/opt/hostedtoolcache/go/1.24.10/x64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc002749c08, {0x7f2b38a02898, 0xc00bdcc740}, 0x43c454?)
	/opt/hostedtoolcache/go/1.24.10/x64/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc002749c08, 0x0)
	/opt/hostedtoolcache/go/1.24.10/x64/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/opt/hostedtoolcache/go/1.24.10/x64/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc002749c08, {0xc007c57000, 0x1000, 0xc000dcd6c0?})
	/opt/hostedtoolcache/go/1.24.10/x64/src/crypto/tls/conn.go:1385 +0x145
bufio.(*Reader).Read(0xc008e032c0, {0xc000000820, 0x9, 0x1aec1a0?})
	/opt/hostedtoolcache/go/1.24.10/x64/src/bufio/bufio.go:245 +0x197
io.ReadAtLeast({0x12c3520, 0xc008e032c0}, {0xc000000820, 0x9, 0x9}, 0x9)
	/opt/hostedtoolcache/go/1.24.10/x64/src/io/io.go:335 +0x91
io.ReadFull(...)
	/opt/hostedtoolcache/go/1.24.10/x64/src/io/io.go:354
net/http.http2readFrameHeader({0xc000000820, 0x9, 0xc010e32240?}, {0x12c3520?, 0xc008e032c0?})
	/opt/hostedtoolcache/go/1.24.10/x64/src/net/http/h2_bundle.go:1805 +0x65
net/http.(*http2Framer).ReadFrame(0xc0000007e0)
	/opt/hostedtoolcache/go/1.24.10/x64/src/net/http/h2_bundle.go:2072 +0x7d
net/http.(*http2clientConnReadLoop).run(0xc00353dfa8)
	/opt/hostedtoolcache/go/1.24.10/x64/src/net/http/h2_bundle.go:9933 +0xda
net/http.(*http2ClientConn).readLoop(0xc008aff6c0)
	/opt/hostedtoolcache/go/1.24.10/x64/src/net/http/h2_bundle.go:9812 +0x79
created by net/http.(*http2Transport).newClientConn in goroutine 686235
	/opt/hostedtoolcache/go/1.24.10/x64/src/net/http/h2_bundle.go:8334 +0xde5


saba-futai commented Nov 28, 2025

In the (HTTP/2 + TLS + Sudoku) test, the HTTP client's read loop runs concurrently with connection closure: http2ClientConn.readLoop is still calling Read while the other side has already called Close and set rawBuf to nil. This race means the underlying read returns nr > 0 while sc.rawBuf is already nil, so the slice expression sc.rawBuf[:nr] panics.


Separately, why did the mieru test fail this time?


wwqgtxx commented Nov 28, 2025

In fact, the use of sc.rawBuf in your code is unsafe during concurrent Close calls.

Firstly, it's not locked or atomic, leading to read/write race issues.

Secondly, sc.rawBuf is returned to the pool directly in Close, meaning it might still be read in Read. If another Conn gets that byte, it can cause unpredictable and serious errors.


wwqgtxx commented Nov 28, 2025

mieru's error is likely due to a coding mistake by themselves, and is unrelated to this PR, so it can be ignored.


saba-futai commented Nov 28, 2025

I haven't encountered any concurrency-related panics in all my time using the Sudoku protocol; I intentionally omitted a mutex for performance reasons. Do you think atomic operations are strictly necessary? If so, I will add them in v0.0.1-d and make sure the tests pass; no new commit should be needed here, just a go.mod update.

That said, I don't think it's necessary.


wwqgtxx commented Nov 28, 2025

Frankly, I don't think your code needs to hold sc.rawBuf in sudoku.Conn. Getting it from the pool every time you read it won't incur much performance overhead, and there won't be any asynchronous return issues.

The fact that you haven't encountered problems in your personal tests is most likely because they never reached high-concurrency scenarios.

Furthermore, your pool isn't shared with mihomo's global pool; otherwise, you would have discovered the serious problems caused by asynchronous buf returns much faster.


wwqgtxx commented Nov 28, 2025

Adding a lock in v0.0.1-d only serializes access to sc.rawBuf itself; it still cannot prevent Read from manipulating the buf after it has been returned to the pool.


saba-futai commented Nov 28, 2025

Let me handle this tomorrow and update my repo in the meantime. Thanks for the feedback.


wwqgtxx commented Nov 28, 2025

How about this version?

LGTM


wwqgtxx commented Nov 28, 2025

Besides code issues, I also want to know about the stability of the Sudoku protocol.

Traditionally, we don't make breaking changes lightly. Therefore, we want to know whether the protocol added in this PR is stable enough, i.e., whether incompatible changes may occur later.

Our strategy is that new changes should be able to be enabled or disabled via options, rather than directly breaking the old implementation.


saba-futai commented Nov 28, 2025

This protocol had been "dogfooded" for a long time by me and my peers before open-sourcing, and it has proven stable for bypassing censorship.
I strictly follow backward-compatibility principles; no breaking changes are planned.
Since the protocol is niche, integrating with mihomo is the crucial step toward getting the community feedback needed to mature it further.
In short: don't worry.


wwqgtxx commented Nov 28, 2025

Finally, please add the configuration example from the PR description to docs/config.yaml.

@wwqgtxx wwqgtxx merged commit 6cf1743 into MetaCubeX:Alpha Nov 28, 2025

wwqgtxx commented Nov 28, 2025

There's a minor issue that doesn't affect usability: the timing and log calls in NewTable should be moved to the mihomo side so that they can work with our log system.

@saba-futai (Author)

I'll adjust it on the evening of the 30th, together with a small feature update that will land first in my repo.
There won't be a release soon, right?


muink commented Dec 1, 2025

Do we need to add a sudoku key generator?
https://github.com/MetaCubeX/mihomo/blob/Alpha/component/generator/cmd.go#L18


saba-futai commented Dec 1, 2025

Do we need to add a sudoku key generator? https://github.com/MetaCubeX/mihomo/blob/Alpha/component/generator/cmd.go#L18

OK, let me add it in #2407.
