OpenSquare Network aims to build a blockchain-based collaboration and reputation platform. It will let funders and hunters collaborate on bounties, with a Council to resolve disputes.
Users' reputation and skill proofs are built from their daily activities.
This project needs the Rust toolchain; install Rust from this link.
Once the Rust toolchain is ready, follow the steps below to initialize the environment.
- Dependencies installation
Check substrate.dev to install the necessary dependencies.
- Init environment
  ```shell
  git clone https://github.com/opensquare-network/opensquare.git
  cd opensquare/scripts
  bash init.sh
  ```
- Compile:
  ```shell
  WASM_BUILD_TOOLCHAIN=nightly-2020-10-06 cargo build
  # We advise setting `WASM_BUILD_TOOLCHAIN=nightly-2020-10-06` in your global
  # environment variables so you do not need to prepend it to every `cargo` command.
  # If you need a release build:
  WASM_BUILD_TOOLCHAIN=nightly-2020-10-06 cargo build --release
  ```
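  To avoid typing the variable every time, you can export it once per shell session (a sketch for bash-like shells; add the export line to your shell profile, e.g. `~/.bashrc`, to make it permanent):

  ```shell
  # Export once per shell session (or add this line to ~/.bashrc / ~/.zshrc):
  export WASM_BUILD_TOOLCHAIN=nightly-2020-10-06
  # Subsequent cargo invocations pick it up automatically, e.g.:
  # cargo build --release
  echo "$WASM_BUILD_TOOLCHAIN"
  ```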
- Run:
  The following parameters mean:

  - `--dev`: start in dev mode, which provides default private keys to start the chain and runs an independent node.
  - `-d <directory>`: store the blockchain database in `<directory>`. If you want to clear all data, just stop the node and delete this directory. Without this parameter, the node uses a default directory to store data.
  - `--execution=<STRATEGY>`: Substrate provides the `Native` and `WASM` execution strategies. For testing and development, we suggest `Native`.

  You can launch a node with the following commands:
- Run in dev mode:

  Dev mode provides the default private key `Alice` to run a single node:

  ```shell
  ./target/debug/opensquare --dev -d .sub --execution=Native
  ```
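  Between runs you may want a clean state. As noted above, deleting the data directory resets the chain; Substrate-based binaries usually also expose a `purge-chain` subcommand (shown as a comment since it needs the compiled binary). A sketch:

  ```shell
  # Stop the node first, then remove the data directory passed via -d:
  mkdir -p .sub            # stand-in for an existing dev data directory
  rm -rf .sub
  [ ! -d .sub ] && echo "dev chain data cleared"
  # Substrate-based binaries usually also offer a purge-chain subcommand:
  # ./target/debug/opensquare purge-chain --dev -d .sub
  ```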
- Local testnet mode: run two nodes on a single machine.

  Run alice:

  ```shell
  ./target/debug/opensquare --chain=local -d .alice --name=alice --alice --execution=Native
  ```

  Run bob:

  ```shell
  ./target/debug/opensquare --chain=local -d .bob --name=bob --bob --execution=Native
  ```
- Local testnet mode: run two nodes on different machines within a LAN.

  Run alice on one machine:

  ```shell
  ./target/debug/opensquare --chain=local -d .alice --name=alice --alice --execution=Native --ws-external --rpc-external --rpc-cors=all
  ```
  Note that if `--port` is not specified, the node uses the default `30333` as its p2p port. Looking through the logs, you can find a line like:

  ```
  Local node identity is: 12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb (legacy representation: QmUaXtahadUKyosAnpdefPRdxM3CkHeb9uh6QZW6hNQcPz)
  ```
  Pick up the identity `12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb` and assemble the bootnode address like:

  ```
  /ip4/<alice run machine ip in LAN>/tcp/<alice node p2p port>/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb
  # e.g.
  # /ip4/192.168.122.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb
  ```

  This string will be the bootnode for bob.
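  The assembly above can also be scripted; a small sketch using the example values from this guide (substitute your own LAN IP, p2p port, and peer identity):

  ```shell
  # Example values taken from the log line above; replace with your own.
  PEER_ID=12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb
  ALICE_IP=192.168.122.1   # alice's address in the LAN
  P2P_PORT=30333           # default p2p port when --port is not given
  BOOTNODE="/ip4/${ALICE_IP}/tcp/${P2P_PORT}/p2p/${PEER_ID}"
  echo "$BOOTNODE"
  ```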
  Then run bob on another machine:

  ```shell
  ./target/debug/opensquare --chain=local -d .bob --name=bob --bob --execution=Native --ws-external --rpc-external --rpc-cors=all --bootnodes=<bootnode above>
  ```
  The bootnode can also be obtained another way: call the RPC method `system_localListenAddresses` on alice, which returns a list like:

  ```json
  {
    "jsonrpc": "2.0",
    "result": [
      "/ip4/127.0.0.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip4/10.0.0.9/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip4/192.168.122.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip4/172.17.0.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip6/::1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip4/127.0.0.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip4/10.0.0.9/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip4/192.168.122.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip4/172.17.0.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb",
      "/ip6/::1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb"
    ],
    "id": 100
  }
  ```

  Just pick a suitable bootnode for bob from this list.
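  When scripting this step, note that loopback entries are unreachable from bob's machine. A sketch that filters them out; the curl call is shown as a comment because it needs a live node, and port 9933 is assumed to be the default Substrate HTTP RPC port:

  ```shell
  # Query alice's listen addresses over HTTP RPC (assumes default port 9933):
  #   curl -H 'Content-Type: application/json' \
  #        -d '{"id":1,"jsonrpc":"2.0","method":"system_localListenAddresses","params":[]}' \
  #        http://127.0.0.1:9933
  # Sample entries from the response above; drop loopback ones for bob:
  BOOTNODE=$(printf '%s\n' \
    '/ip4/127.0.0.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb' \
    '/ip4/192.168.122.1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb' \
    '/ip6/::1/tcp/30333/p2p/12D3KooWC6ojeA28QDf2GBupCWbqsc2W8JwFUy6GW9Zjwoppz1wb' \
    | grep -v '/ip4/127\.' | grep -v '/ip6/::1' | head -n1)
  echo "$BOOTNODE"
  ```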
Please refer to this Google Doc to compose a basic user story.
This project is supported by a Web3 Foundation grant.
We would also like to thank the following teams for their support:

