Introduction
This documentation covers the Rusty Kaspa framework — Rust infrastructure and WASM bindings that allow this framework to be used in JavaScript and TypeScript environments.
Please contribute! This mdbook is easy to edit. If you’d like to suggest changes or add content, check the Contributing page.
Technologies
Rusty Kaspa is built using multi-platform Rust crates (libraries). The Rust framework is exposed to JavaScript and TypeScript via the WASM compilation target, producing the WASM SDK. Additionally, the Rust framework can be used by native applications targeting platforms such as Windows, Linux, and macOS.
The WASM SDK is compatible with all major browsers, Node.js, Bun, and environments like NWJS and Electron. It is also compatible with Chrome Extension Manifest Version 3, making it suitable for use in Chrome browser extensions.
Discord
For help, visit the Kaspa Discord server and join the #development channel.
Git
The Rusty Kaspa framework is available on the Rusty Kaspa GitHub repository.
Documentation
- Rusty Kaspa README contains build instructions for the Rusty Kaspa framework.
- WASM SDK README contains build instructions for the WASM SDK as well as instructions on running WASM SDK examples.
WASM SDK
TypeScript and Rust API documentation is available at the following URLs:
- WASM TypeScript documentation (TypeDoc is built from development releases)
Rust crates
Rust documentation is broken into multiple crates (modules) comprising the Rusty Kaspa framework. Each crate is documented separately and can be accessed using the links below.
Client-side framework
These crates are available to client-side applications.
General
- kaspa-consensus-client - client-side transaction primitives
- kaspa-consensus-core - consensus primitives
- kaspa-consensus-wasm - consensus primitives for WASM
- kaspa-hashes - hash functions
- kaspa-math - math functions
- kaspa-metrics-core - performance metrics data
- kaspa-notify - notification subsystem
- kaspa-pow - proof-of-work primitives
- kaspa-txscript - transaction scripts
- kaspa-txscript-errors - transaction script errors
- kaspa-utils - general utilities and trait extensions
Wallet
- kaspa-addresses - address management
- kaspa-bip32 - BIP32 & BIP39 implementation
- kaspa-wallet-keys - secp256k1 key management
- kaspa-wallet-pskt - PSKT (Partially Signed Kaspa Transactions)
- kaspa-wallet-core - Wallet SDK & core wallet implementation
- kaspa-wallet-macros - Wallet macros
RPC
- kaspa-grpc-client - gRPC client
- kaspa-grpc-core - gRPC core (used by client and server)
- kaspa-rpc-core - Kaspa Node RPC API
- kaspa-rpc-macros - RPC macros
- kaspa-wrpc-client - wRPC client
- kaspa-wrpc-wasm - wRPC WASM client
WASM
- kaspa-wasm - WASM SDK
- kaspa-wasm-core - Base WASM type declarations
Applications
- kaspa-cli - Kaspa command line RPC interface and wallet
Rusty Kaspa Node framework
These crates comprise the Rusty Kaspa node framework. They can be used in native Rust applications only.
Consensus & p2p
- kaspa-addressmanager - p2p address management
- kaspa-alloc - memory allocator
- kaspa-connectionmanager - p2p connection management
- kaspa-consensus-notify - consensus notifications
- kaspa-consensus - consensus
- kaspa-consensusmanager - consensus management
- kaspa-core - node runtime management
- kaspa-database - database
- kaspa-index-core - UTXO indexing
- kaspa-index-processor - UTXO indexing
- kaspa-merkle - Merkle tree processing
- kaspa-mining - PoW algorithms
- kaspa-mining-errors - PoW errors
- kaspa-muhash - MuHash
- kaspa-p2p-flows - p2p message flows
- kaspa-p2p-lib - p2p library
- kaspa-perf-monitor - performance monitoring
- kaspa-utils-tower - gRPC tower middleware
- kaspa-utxoindex - UTXO indexing
RPC
- kaspa-grpc-server - gRPC server
- kaspa-rpc-service - Kaspa RPC service
- kaspa-wrpc-server - wRPC server
Applications
- kaspad - Kaspa p2p node daemon
Building from source
Please follow the up-to-date build instructions in the Rusty Kaspa README.
Docker
Rusty Kaspa images based on Alpine Linux can be found here:
- Docker build scripts: https://github.com/supertypo/docker-rusty-kaspa
- Published images: https://hub.docker.com/r/supertypo/rusty-kaspad
Running Rusty Kaspa p2p Node
Typical execution arguments for a mainnet Rusty Kaspa p2p node are as follows:
cargo run --release -- --utxoindex --disable-upnp --maxinpeers=64 --perf-metrics --outpeers=32 --yes --perf-metrics-interval-sec=1 --rpclisten=127.0.0.1:16110 --rpclisten-borsh=127.0.0.1:17110 --rpclisten-json=127.0.0.1:18110
To configure the node for testnet, simply add the --testnet flag. This will start the node and connect it to the current default testnet-10 network.
To connect to an alternate testnet network, use the --testnet flag followed by --netsuffix=<id>, where <id> is the testnet id. For example, to connect to testnet-11, use --testnet --netsuffix=11.
Please see the RPC ports section for more information on RPC port selection.
UTXO Index
UTXO Index is an auxiliary database that enables the Kaspa node to perform additional tracking of transaction addresses. This allows you to set up notifications for when a new transaction matching your addresses is detected.
If UTXO Index is not enabled, RPC calls requesting UTXO By Addresses information will result in an error.
To enable the UTXO Index, run the node with the --utxoindex command-line argument.
Systemd Service
[Unit]
Description=Kaspad p2p Node (mainnet)
[Service]
User=aspect
ExecStart=/home/user/rusty-kaspa/target/release/kaspad --utxoindex --disable-upnp --maxinpeers=64 --perf-metrics --outpeers=32 --yes --perf-metrics-interval-sec=1 --rpclisten=127.0.0.1:16110 --rpclisten-borsh=127.0.0.1:17110 --rpclisten-json=127.0.0.1:18110
RestartSec=5
Restart=on-failure
[Install]
WantedBy=multi-user.target
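Once the unit file is in place, it can be activated with the standard systemd commands. This is a sketch assuming the unit above is saved as /etc/systemd/system/kaspad.service (the filename is an assumption; adjust paths and the User= entry for your system):

```shell
# Assumes the unit above was saved as /etc/systemd/system/kaspad.service
sudo systemctl daemon-reload
sudo systemctl enable --now kaspad

# Follow the node logs:
journalctl -u kaspad -f
```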
Command line arguments
Kaspa full node daemon (rusty-kaspa) v0.15.2
Usage: kaspad [OPTIONS]
Options:
-C, --configfile <CONFIG_FILE>
Path of config file.
-b, --appdir <DATA_DIR>
Directory to store data.
--logdir <LOG_DIR>
Directory to log output.
--nologfiles
Disable logging to files.
-t, --async-threads=<async_threads>
Specify number of async threads (default: 10).
-d, --loglevel=<LEVEL>
Logging level for all subsystems {off, error, warn, info, debug, trace}
-- You may also specify <subsystem>=<level>,<subsystem2>=<level>,... to set the log level for individual subsystems. [default: info]
--rpclisten[=<IP[:PORT]>]
Interface:port to listen for gRPC connections (default port: 16110, testnet: 16210).
--rpclisten-borsh[=<IP[:PORT]>]
Interface:port to listen for wRPC Borsh connections (default port: 17110, testnet: 17210).
--rpclisten-json[=<IP[:PORT]>]
Interface:port to listen for wRPC JSON connections (default port: 18110, testnet: 18210).
--unsaferpc
Enable RPC commands which affect the state of the node
--connect=<IP[:PORT]>
Connect only to the specified peers at startup.
--addpeer=<IP[:PORT]>
Add peers to connect with at startup.
--listen=<IP[:PORT]>
Add an interface:port to listen for connections (default all interfaces port: 16111, testnet: 16211).
--outpeers=<outpeers>
Target number of outbound peers (default: 8).
--maxinpeers=<maxinpeers>
Max number of inbound peers (default: 128).
--rpcmaxclients=<rpcmaxclients>
Max number of RPC clients for standard connections (default: 128).
--reset-db
Reset database before starting node. It's needed when switching between subnetworks.
--enable-unsynced-mining
Allow the node to accept blocks from RPC while not synced (this flag is mainly used for testing)
--utxoindex
Enable the UTXO index
--max-tracked-addresses=<max-tracked-addresses>
Max (preallocated) number of addresses being tracked for UTXO changed events (default: 0, maximum: 14680063).
Setting to 0 prevents the preallocation and sets the maximum to 1835007, leading to 0 memory footprint as long as unused but to sub-optimal footprint if used.
--testnet
Use the test network
--netsuffix=<netsuffix>
Testnet network suffix number
--devnet
Use the development test network
--simnet
Use the simulation test network
--archival
Run as an archival node: avoids deleting old block data when moving the pruning point (Warning: heavy disk usage)
--sanity
Enable various sanity checks which might be compute-intensive (mostly performed during pruning)
--yes
Answer yes to all interactive console questions
--uacomment=<user_agent_comments>
Comment to add to the user agent -- See BIP 14 for more information.
--externalip=<externalip>
Add a socket address(ip:port) to the list of local addresses we claim to listen on to peers
--perf-metrics
Enable performance metrics: cpu, memory, disk io usage
--perf-metrics-interval-sec=<perf-metrics-interval-sec>
Interval in seconds for performance metrics collection.
--disable-upnp
Disable upnp
--nodnsseed
Disable DNS seeding for peers
--nogrpc
Disable gRPC server
--ram-scale=<ram-scale>
Apply a scale factor to memory allocation bounds. Nodes with limited RAM (~4-8GB) should set this to ~0.3-0.5 respectively. Nodes with
a large RAM (~64GB) can set this value to ~3.0-4.0 and gain superior performance especially for syncing peers faster
-h, --help
Print help
-V, --version
Print version
Integrating
Loading into a Web App
Loading in a web browser requires importing the JavaScript module and awaiting the async bootstrap handler, as follows.
Example
<html>
<head>
<script type="module">
import * as kaspa_wasm from './kaspa/kaspa-wasm.js';
(async () => {
const kaspa = await kaspa_wasm.default('./kaspa/kaspa-wasm_bg.wasm');
// you are now ready to use the kaspa object
// print the version of WASM SDK into the browser console
console.log(kaspa.version());
})();
</script>
</head>
<body></body>
</html>
Loading into a Node.js App
For Node.js, the Kaspa WASM SDK is available as a regular CommonJS module that can be loaded using the require() function.
Example
// W3C WebSocket module shim
// (not needed in a browser or Bun)
// @ts-ignore
globalThis.WebSocket = require('websocket').w3cwebsocket;
let kaspa = require('./kaspa');
let { RpcClient, Resolver } = kaspa;
kaspa.initConsolePanicHook();
const rpc = new RpcClient({
// url : "127.0.0.1",
resolver: new Resolver(),
networkId : "mainnet",
});
(async () => {
await rpc.connect();
console.log(`Connected to ${rpc.url}`)
const info = await rpc.getBlockDagInfo();
console.log("GetBlockDagInfo response:", info);
await rpc.disconnect();
})();
The Node.js WebSocket shim
To use the WASM RPC client in the Node.js environment, you need to introduce a W3C WebSocket-compatible object before using the RpcClient class. You can use any Node.js module that exposes a W3C-compatible WebSocket implementation. The recommended package that implements this functionality is websocket.
Add the following to your package.json:
"dependencies": {
    "websocket": "1.0.34"
}
Following that, you can use the following shim:
// WebSocket
globalThis.WebSocket = require('websocket').w3cwebsocket;
Security Considerations
The WASM SDK binaries are built directly from the Rust project codebase. The WASM SDK provides all the necessary primitives to interact with Kaspa from Node.js or browser environments.
Using WASM SDK
To load the WASM SDK, you can use the “kaspa” or “kaspa-wasm” NPM modules. However, for security-critical applications, it is recommended to either build the WASM SDK from the Rust source code yourself or obtain prebuilt binaries and embed them directly into your project.
NPM Versioning
For security-focused applications, any third-party JavaScript Node.js module dependencies should be treated as potentially insecure due to a variety of attack vectors, such as code injection vulnerabilities.
If you must use NPM dependencies, it is crucial to manually review all dependencies and set a fixed version of the dependency, including the patch number. This approach helps prevent unexpected updates when new versions of dependencies are published on NPM.
Manually reviewing and embedding dependencies directly into your project or into a library your project relies on is another excellent option to reduce the risks associated with dependency updates.
Serving
It is highly advisable to serve WASM libraries, as well as other cryptocurrency application components, from a server controlled by you. Serving WASM libraries from a CDN or third-party servers can expose your application to potential attacks, such as code injection.
Using Subresource Integrity
When loading WASM or your own scripts via the <script> tag, you can specify an integrity hash for the target resources to ensure that the resource hasn't been tampered with.
<script
src="https://example.com/example-framework.js"
integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
crossorigin="anonymous"></script>
Using Subresource Integrity (SRI) helps protect your application by verifying that resources loaded from third parties match the expected content.
Source: Subresource integrity at Mozilla Developer
Building from Source
Instructions for building WASM SDK from source can be found in the Rusty Kaspa repository WASM SDK README.
Redistributables
WASM redistributables are available prebuilt for web browsers and Node.js. The entire framework can also be built from the Rusty Kaspa sources (into WASM) or used directly within Rust for Rust-based application integration.
You can download the latest stable version of the WASM SDK from: https://github.com/kaspanet/rusty-kaspa/releases
Development releases
Developers periodically release updates to the WASM SDK. These updates are available for download from the following links:
If you are developing an application that uses the WASM SDK, you can use the development releases to stay up-to-date with the latest features and bug fixes.
Examples
WASM SDK
For information on running the WASM SDK examples, please check the following WASM SDK README sections:
Example sources
WASM SDK examples can be found at https://github.com/kaspanet/rusty-kaspa/tree/master/wasm/examples/
The following URL is the ideal starting point for WASM SDK examples: https://github.com/kaspanet/rusty-kaspa/tree/master/wasm/examples/nodejs/javascript
Rust SDK
Rust SDK is much more complex since the entire Rusty Kaspa project is written in Rust. As such, the entire framework can serve as a reference.
A few key points can be found here:
- wRPC client examples - examples on connecting to a Kaspa node using wRPC and executing RPC methods as well as subscribing to and processing notifications.
- PSKT multisig example - a multisig example that uses PSKT primitives.
- Kaspa Cli - a project that uses the Wallet subsystem as well as the RPC subsystem.
- Kaspa NG - Kaspa NG is built using Rusty Kaspa SDK and is a good example of a full-fledged Rust application built using Rusty Kaspa.
Explorers
Kaspa DAG explorers for different networks are available at these URLs:
- Mainnet: https://explorer.kaspa.org
- Testnet 10: https://explorer-tn10.kaspa.org
- Testnet 11: https://explorer-tn11.kaspa.org
Explorer Rest API
REST API documentation for Kaspa explorers can be found here:
Faucets
Faucets allow developers to quickly obtain testnet coins for development and testing purposes. The following testnet faucets are available for the Kaspa network:
Testnet 10
https://faucet-testnet.kaspanet.io
Testnet 11
https://faucet-t11.kaspanet.io
CPU miners
The following high-performance CPU miner is available for mining on the Kaspa testnets:
https://github.com/elichai/kaspa-miner
Applications
The Kaspa ecosystem contains a number of applications that are useful for testing during development.
Please note that applications using the Wallet API can interoperate with each other (i.e., they can open the same wallet files). There are currently two wallets based on the Wallet API:
Kaspa CLI
Kaspa CLI is a terminal interface that includes the Kaspa Wallet and can be used to execute various RPC commands against a node.
Installation
cargo install kaspa-cli
kaspa-cli
Once running, you can type the help or settings commands.
By default, kaspa-cli connects to the Public Node Network (PNN). You can use server <wRPC-url> to change the RPC endpoint and network to change the network type.
Kaspa NG
Kaspa NG is an interactive, multi-platform application developed in Rust on top of the Rusty Kaspa framework. It can be run as a desktop wallet or a web wallet. Kaspa NG incorporates the Rusty Kaspa p2p node when running as a desktop application and can also connect to PNN.
When running a local Rusty Kaspa p2p node, other applications using wRPC can connect to it locally.
In addition to the wallet, Kaspa NG also includes node performance metrics tracking and a BlockDAG visualizer.
Releases
- Desktop binary releases on the Kaspa NG GitHub repository
- The web instance can be accessed at https://kaspa-ng.org
3rd Party Protocols
The Kaspa network does not currently have a smart contract solution; however, the network can be used as a sequencer for 3rd-party protocols. These protocols can be used to build various 3rd-party application infrastructures on top of the Kaspa network.
As of the time of writing, the following 3rd party protocols are available for Kaspa:
Kasplex KRC-20
The Kasplex protocol is a KRC-20 token standard that can be used to build tokenized assets on top of the Kaspa network.
Website
Documentation
Examples
RPC
Rusty Kaspa integrates support for the following RPC protocols:
- gRPC (streaming gRPC, protobuf encoding)
- wRPC (WebSocket-framed, JSON-RPC-like encoding)
- wRPC (WebSocket-framed, high-performance Borsh encoding)
For comparison of these protocols and client documentation, please see the Protocols and Clients sections.
IMPORTANT
When subscribing to event notifications against the Kaspa node, the subscription lifetime is tied to the RPC client connection. If the connection is lost and reconnected, the subscription will be lost. You will have to resubscribe for notifications against the node.
The best way to handle this is to listen for RPC events such as connect and to subscribe to node notifications from within this handler.
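The pattern can be sketched as follows. A mock client object stands in for the RpcClient here so the example is self-contained; the real WASM SDK RpcClient exposes addEventListener() and subscription methods such as subscribeUtxosChanged() (which are async in the real SDK):

```javascript
// Re-issue subscriptions every time the connection is (re)established.
function installResubscribe(client, addresses) {
    client.addEventListener("connect", () => client.subscribeUtxosChanged(addresses));
}

// Minimal mock standing in for RpcClient, to show the handler re-runs
// on every connect:
const subscriptions = [];
const mockClient = {
    handlers: [],
    addEventListener(event, fn) { if (event === "connect") this.handlers.push(fn); },
    subscribeUtxosChanged(addrs) { subscriptions.push(addrs); },
    connect() { this.handlers.forEach((fn) => fn()); },
};

installResubscribe(mockClient, ["kaspa:example-address"]);
mockClient.connect(); // initial connection -> subscribed
mockClient.connect(); // simulated reconnect -> automatically resubscribed
console.log(subscriptions.length); // 2
```

Because the subscription is made inside the connect handler rather than once at startup, it survives any number of reconnects.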
Protocols
The Rusty Kaspa p2p node supports two protocols, described below:
gRPC
gRPC is an open-source universal RPC framework. It is designed to be efficient and scalable, with support for multiple programming languages.
For the list of gRPC clients, please take a look at https://github.com/grpc-ecosystem/awesome-grpc
Benefits
- Wide support across multiple languages and development environments.
- Efficient due to protobuf serialization, which gRPC relies on.
Drawbacks
- Lack of well-established support for protocol routing infrastructure.
- Rusty Kaspa uses a streaming version of gRPC, which is incompatible with some existing gRPC routing infrastructures.
- While gRPC is intended to be high-performance due to protobuf serialization, much of the data in Rusty Kaspa is passed as hex or string-serialized data, which reduces efficiency. However, this can make integration easier with client-side infrastructure.
- No support in the Rusty Kaspa WASM SDK.
wRPC
wRPC is a proprietary RPC layer developed in Rust by members of the Rusty Kaspa development team. It is designed for high efficiency and multi-platform compatibility, and uses standard WebSockets for transport.
Benefits
- Full integration with Rust and WASM SDKs, including support for application load-balancing.
- Easy to use.
- High performance.
- Supports protocol routing infrastructure (it uses standard WebSockets).
- Supports both binary and JSON serialization.
- The JSON encoding follows a WebSocket-framed JSON-RPC-like protocol, making it usable in third-party applications.
Drawbacks
- Limited support in other languages (currently only supported in Rust and WASM SDKs).
Clients
WASM SDK - wRPC
- RpcClient class - main RPC class for interacting with the node
- Resolver class - class for resolving public nodes against PNN
Examples
Rust SDK - wRPC
- KaspaRpcClient struct - main wRPC struct for interfacing with the node
- RpcApi trait - trait that defines RPC methods
- Resolver struct - struct for resolving public nodes against PNN
Examples
Rust SDK - gRPC
- GrpcClient struct implementation.
gRPC .proto definitions
gRPC integration provides .proto files that can be used to generate client code in multiple languages. The gRPC .proto files messages.proto and rpc.proto can be found in the Rusty Kaspa repository at https://github.com/kaspanet/rusty-kaspa/tree/master/rpc/grpc/core/proto.
Port Selection
All RPC interfaces are exposed on specific ports that depend on the network type or network ID you are connecting to.
RPC interface ports can be changed when running kaspad using the following arguments:
- --rpclisten=<ip>[:<port>] for gRPC
- --rpclisten-borsh=<ip>[:<port>] for wRPC with Borsh encoding
- --rpclisten-json=<ip>[:<port>] for wRPC with JSON encoding
For local interface binding, you can specify 127.0.0.1 or localhost as the IP address. For public interface binding, you can specify 0.0.0.0 or the specific IP address you want to bind to.
NOTE: Rusty Kaspa does not have a specific port for the Testnet network type. The 1*210 port is used for all testnet networks. However, when running two testnet nodes on the same machine, it is customary to use 16210 for Testnet-10 and 16310 for Testnet-11. As such, these ports are listed as defaults. However, if you simply pass the --testnet flag to kaspad, it will assign the default testnet port of 1*210 regardless of the testnet network ID.
Default gRPC Ports
- Mainnet: 16110
- Testnet-10: 16210
- Testnet-11: 16310
- Simnet: 16510
- Devnet: 16610
wRPC Port Handling
wRPC uses URLs when specifying connection endpoints. The wRPC client performs auto-detection of ports when validating the supplied endpoint URL as follows:
- If the URL is fully formed, containing a protocol scheme (ws:// or wss://) or a path, the port is not specified unless manually supplied as part of the URL.
- If the URL does not contain a protocol scheme, the default port and protocol scheme are assigned based on the network type.
The default protocol scheme assignment is based on the execution environment. If running the RPC client in a browser accessing an HTTPS endpoint, the protocol will be forced to wss://. If running the RPC client in a Node.js environment or in a browser via an HTTP endpoint, the protocol will be set to ws://. (A page located at an HTTPS endpoint cannot open insecure WebSocket connections to ws:// endpoints due to security restrictions.)
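The detection rules above can be sketched as a small helper (hypothetical, not the SDK's actual implementation; `secureContext` models a browser page served over HTTPS):

```javascript
// If the endpoint already carries a scheme or a path, use it as supplied;
// otherwise assign a scheme based on the environment and the default port
// for the network type.
function normalizeWrpcUrl(endpoint, defaultPort, secureContext) {
    if (/^wss?:\/\//.test(endpoint) || endpoint.includes("/")) {
        return endpoint; // fully formed: keep scheme/port/path as supplied
    }
    const scheme = secureContext ? "wss" : "ws";
    return `${scheme}://${endpoint}:${defaultPort}`;
}

console.log(normalizeWrpcUrl("node.example.com", 17110, false)); // ws://node.example.com:17110
console.log(normalizeWrpcUrl("wss://node.example.com/path", 17110, false)); // unchanged
```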
Default wRPC Borsh Encoding Ports
- Mainnet: 17110
- Testnet-10: 17210
- Testnet-11: 17310
- Simnet: 17510
- Devnet: 17610
Default wRPC JSON Encoding Ports
- Mainnet: 18110
- Testnet-10: 18210
- Testnet-11: 18310
- Simnet: 18510
- Devnet: 18610
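The port tables above can be condensed into a lookup table, which is handy in client code (note that the testnet-11 entries are the customary side-by-side convention rather than what kaspad assigns by default for --testnet):

```javascript
// Default RPC ports per network, per the tables above.
const DEFAULT_RPC_PORTS = {
    grpc:      { mainnet: 16110, "testnet-10": 16210, "testnet-11": 16310, simnet: 16510, devnet: 16610 },
    wrpcBorsh: { mainnet: 17110, "testnet-10": 17210, "testnet-11": 17310, simnet: 17510, devnet: 17610 },
    wrpcJson:  { mainnet: 18110, "testnet-10": 18210, "testnet-11": 18310, simnet: 18510, devnet: 18610 },
};

console.log(DEFAULT_RPC_PORTS.wrpcBorsh["testnet-10"]); // 17210
```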
Checking RPC availability
There are two simple ways to check if the RPC server is running and listening on a port: Rusty Kaspa CLI Wallet and Kaspa NG desktop version.
Using Rusty Kaspa CLI Wallet
Rusty Kaspa CLI Wallet is available as part of the official Rusty Kaspa releases.
Simply download the latest release, extract the archive, and run kaspa-wallet from the command line.
Once started, you can use the following commands to check if the RPC server is running and listening on a port:
network mainnet
connect localhost
rpc get-info
If you are building from source, you can run cargo run --release from the /cli folder in the Rusty Kaspa repository.
Using Kaspa NG
- Go to the Settings panel.
- Navigate to Kaspa p2p Network & Node Connection.
- Make sure the Kaspa Network is set to the same network running on your node.
- Make sure Kaspa Node is set to Remote.
- Under Remote p2p Node Configuration, select Custom.
- In wRPC Connection Settings, select the protocol and the wRPC URL.
- Hit Apply.
If connecting to default ports, you can simply enter localhost.
IMPORTANT
You cannot use the Kaspa NG online version at https://kaspa-ng.org to connect to your local RPC server. Web browser security restrictions allow the online Kaspa NG version to connect only to Kaspa nodes running on SSL endpoints, while the local RPC server does not use SSL (please see how to configure a proxy server for SSL endpoints in the proxies section). The desktop version of Kaspa NG does not have this restriction.
Public Node Network
The Kaspa Public Node Network is a contributor-driven initiative to provide public nodes that are available for use by the community. The network is managed by the Kaspa Resolver load balancer, which monitors public nodes and can be queried for the node with the least number of active client connections.
This infrastructure is primarily intended for development and testing purposes. It is not recommended to use the public node network for large-scale production applications or exchanges, albeit many web wallets do, as web wallets are not capable of running their own nodes.
PNN is fully decentralized: nodes are operated by independent PNN contributors. A Telegram channel and Discord are used to coordinate updates and maintenance activities among the contributors.
If you are interested in contributing to the PNN, please reach out to developers on the Discord #development channel. Running a node within PNN requires a stable internet connection and a dedicated VPS server. All node deployments are standardized to run on Debian-compatible Linux distributions (Debian / Ubuntu) and are deployed using the kHOST deployment automation tool.
The status of PNN nodes can be viewed at the PNN Status Page.
Kaspa Resolver
Kaspa Resolver is an ALB (application load balancer) that can connect to a set of Rusty Kaspa p2p nodes and then be queried via a REST API endpoint for the node with the least active client connections.
Kaspa Resolver is the backbone of the Public Node Network (PNN) and is used to balance the load between the public nodes. Kaspa Resolver can also be used to create private node clusters.
The Kaspa Resolver API is integrated directly within the Rust and WASM SDKs, specifically within the RpcClient class, where instead of passing the endpoint URL, you can pass a Resolver object that will be used to obtain the node endpoint each time the client connects. If the node becomes unavailable, the client will automatically switch to the next best available node.
GitHub repository: https://github.com/aspectron/kaspa-resolver
kHOST
kHOST is a deployment automation tool used to deploy and manage Kaspa nodes. It is used by Kaspa Public Node Network (PNN) contributors to deploy and manage their nodes.
There is nothing special about this project other than that it is a Rust-based deployment automation tool meant to simplify node deployment and updates.
GitHub repository: https://github.com/aspectron/khost
HTTP Proxies
HTTP proxies can be used to route traffic to the Kaspa p2p nodes. This is useful when you want to expose the p2p node to the public internet, but you don't want to expose the node directly. This can be done by using an HTTP proxy like NGINX or HAProxy.
Proxies also allow you to map multiple nodes to different subdomains or paths. This can be useful when you have multiple networks running on the same machine.
NGINX
Here is an example of an NGINX configuration that routes traffic to different Kaspa p2p nodes based on the path:
server {
listen 80;
listen [::]:80;
# Replace example.com with your domain
server_name *.example.com;
client_max_body_size 1m;
# Kaspa p2p node (kaspa-mainnet)
location /kaspa/mainnet/wrpc/borsh {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:17110/;
}
# Kaspa p2p node (kaspa-mainnet)
location /kaspa/mainnet/wrpc/json {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:18110/;
}
# Kaspa p2p node (kaspa-testnet-10)
location /kaspa/testnet-10/wrpc/borsh {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:17210/;
}
# Kaspa p2p node (kaspa-testnet-10)
location /kaspa/testnet-10/wrpc/json {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:18210/;
}
# Kaspa p2p node (kaspa-testnet-11)
location /kaspa/testnet-11/wrpc/borsh {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:17310/;
}
# Kaspa p2p node (kaspa-testnet-11)
location /kaspa/testnet-11/wrpc/json {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:18310/;
}
}
Note on use of NGINX
The default NGINX configuration allows for 768 simultaneous connections per CPU core. If you need to increase this limit, you can do so by modifying the worker_connections directive in the nginx.conf file.
Transactions
Kaspa transactions are similar to those of Bitcoin, but there are some differences in how Kaspa calculates transaction constraints.
The following section covers transaction constraints such as compute and storage mass limits and how to calculate fees based on these constraints.
Transaction Fees
Kaspa transaction fees are determined by the cumulative difference between the input and output amounts, i.e., fees = sum(inputs) - sum(outputs).
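As a sketch of the formula above, with amounts expressed in sompi (1 KAS = 100,000,000 sompi) and illustrative values:

```javascript
const SOMPI_PER_KASPA = 100_000_000n; // 1 KAS = 10^8 sompi

// fees = sum(inputs) - sum(outputs), computed on BigInt sompi amounts
function transactionFee(inputs, outputs) {
    const sum = (amounts) => amounts.reduce((acc, v) => acc + v, 0n);
    return sum(inputs) - sum(outputs);
}

// One 10 KAS input spent into a 7 KAS output and a 2.999 KAS change output:
const inputs  = [10n * SOMPI_PER_KASPA];
const outputs = [7n * SOMPI_PER_KASPA, 299_900_000n];
console.log(transactionFee(inputs, outputs)); // 100000n sompi, i.e. 0.001 KAS
```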
Kaspa transaction fees are based on transaction mass, which is measured in grams.
- Each transaction has a mass limit of 100,000 grams (imposed by the mempool standard).
- Each block has a mass limit of 500,000 grams (imposed by consensus rules).
Mass Components
The following elements must be considered during transaction fee calculations:
Network Mass
Network mass is the larger value between compute mass and storage mass, calculated as:
network_mass = max(compute_mass, storage_mass)
Network mass is governed by the KIP-0009 standard, with the basic principles explained in the Storage Mass section.
The implementation of network mass calculations can be found in the kaspa_consensus_core::mass module.
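As a quick illustration of the rule (the mass values below are made up, and the limits are those stated in the Transaction Fees section):

```javascript
const TX_MASS_LIMIT = 100_000;    // grams, mempool standard
const BLOCK_MASS_LIMIT = 500_000; // grams, consensus rule

// network_mass = max(compute_mass, storage_mass)
function networkMass(computeMass, storageMass) {
    return Math.max(computeMass, storageMass);
}

const mass = networkMass(2_036, 11_000);
console.log(mass, mass <= TX_MASS_LIMIT); // 11000 true
```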
Signed vs Unsigned Transactions
When calculating transaction fees, it is important to distinguish between signed and unsigned transactions. The node expects all mass calculations to be performed on signed transactions because the signature is part of the transaction data, contributing to the compute mass.
Both ECDSA and Schnorr signatures are 64 bytes in size. Therefore, when calculating mass for unsigned transactions, functions pad the calculated mass by 64 bytes to account for the signature that will be added upon signing.
WASM SDK helper functions, such as calculateTransactionMass(), explicitly expect unsigned transactions.
Compute Mass
Compute mass is derived from the overall transaction byte size as well as the number of SigOps required by the transaction script.
The implementation of compute mass calculation can be found in the kaspa_consensus_core::mass module.
Storage Mass
Storage mass is calculated using the KIP-0009 algorithm, which takes into account the number of inputs, outputs, and their values to ensure that transactions fall within certain constraints. If a transaction exceeds these constraints, the mass will rapidly increase, resulting in the transaction being rejected by the network.
The constraints imposed by KIP-0009 can be simplified as follows:

- You cannot create transactions that significantly "fan out" small amounts of KAS into many outputs. For example, if you have one input and try to create many small outputs, the transaction mass will increase quickly, eventually reaching a limit that will cause the transaction to be rejected by the network. The general rule is that the number of outputs should stay below 10 times the number of inputs. For instance, a transaction with 1 input and 10 outputs (1->10) will be rejected, while a transaction with 5 inputs and 10 outputs (5->10) will be accepted. The following logic also applies:
  - If all outputs are greater than 100 KAS, there is effectively no limit on the number of outputs you can have.
  - If all outputs are greater than 10 KAS, you can have up to approximately 100 outputs before the transaction mass reaches 100k grams.
  - If all outputs are greater than 1 KAS, you are limited to 10 outputs.
- You are allowed to combine or transfer inputs without increasing the transaction mass, as long as the number of outputs is equal to or fewer than the number of inputs (e.g., 10->10 or 10->1 transactions are allowed).
- The value of each output must not deviate too much from the input amounts, as explained in the Output Deviation section.
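The fan-out limits above can be approximated with a back-of-the-envelope calculation. The sketch below assumes the KIP-0009 storage constant C = 10^12 and ignores the input-side relief term, so it overestimates mass for well-balanced transactions; treat it as illustrative only:

```javascript
// Each output contributes roughly C / value (in sompi) grams of storage mass.
const C = 10n ** 12n;               // KIP-0009 storage constant (assumed)
const SOMPI_PER_KAS = 100_000_000n; // 1 KAS = 10^8 SOMPI
const MASS_LIMIT = 100_000n;        // standard transaction mass limit in grams

// Approximate storage mass for a list of integer KAS output values.
function approxStorageMass(outputValuesKas) {
  return outputValuesKas.reduce(
    (mass, kas) => mass + C / (BigInt(kas) * SOMPI_PER_KAS),
    0n,
  );
}

approxStorageMass(Array(10).fill(1));   // 100000n - ten 1 KAS outputs hit MASS_LIMIT
approxStorageMass(Array(100).fill(10)); // 100000n - one hundred 10 KAS outputs, same
```

This reproduces the rules of thumb above: outputs of 1 KAS cap out at roughly 10 outputs, while outputs of 10 KAS allow roughly 100 outputs before reaching the 100k-gram limit.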
Output Deviation
Output deviation refers to an output value that is significantly smaller than the input.
This constraint can be simplified as follows:
- The KAS amount of any output must not be less than 0.019 KAS (0.02 KAS can be used as a safe cut-off point). Having any output below this amount will result in rejection.

However, while this constraint can be used as a simple general rule for transaction creation, the actual relationship between input and output values allows for some flexibility. For example, you can create two outputs of 0.001 and 0.003 if your input is 0.004. Additionally, an output of 0.003 can be created if you have an input of 0.007 (you can break down 0.007 into 0.004 + 0.003), etc.
Please see the Change Outputs section for more information on handling change outputs.
Change Outputs
When creating transactions, it is important to observe the resulting amount of the change output. As a general rule, if your change output amount falls below 0.02 KAS, the output should be removed and the amount sacrificed to fees.
You can certainly add additional inputs in an effort to raise the change output amount, but doing so slightly increases the compute mass, and in general the UTXO selection algorithm can become rather complex.
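The rule of thumb above can be sketched as follows (`finalizeChange` is a hypothetical helper, not part of the SDK):

```javascript
const SOMPI_PER_KAS = 100_000_000; // 1 KAS = 10^8 SOMPI
const CHANGE_CUTOFF_SOMPI = 0.02 * SOMPI_PER_KAS; // 2,000,000 SOMPI

// Hypothetical helper: drop dust change and sacrifice it to fees.
function finalizeChange(changeSompi) {
  if (changeSompi < CHANGE_CUTOFF_SOMPI) {
    // The output would violate the KIP-0009 deviation constraint; remove it.
    return { changeOutput: null, extraFeeSompi: changeSompi };
  }
  return { changeOutput: changeSompi, extraFeeSompi: 0 };
}

finalizeChange(1_000_000); // change dropped, 1,000,000 SOMPI added to fees
finalizeChange(5_000_000); // change of 5,000,000 SOMPI (0.05 KAS) is kept
```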
Fee Rate & QoS
Fee Rate
The transaction mass determines the mandatory network fees, which are translated into KAS using the "Fee Rate" multiplier. A 1:1 translation (i.e., a fee rate of `1.0`) represents the minimum mandatory fees required by the network to process a transaction. Thus, the network's minimum accepted fee rate is `1.0`. A fee rate higher than `1.0` gives the transaction priority over others.

For example, a transaction with a mass of 2,000 grams requires a minimum fee of 2,000 SOMPI (with a fee rate of `1.0`). If the fee rate is increased to `2.0`, the fee becomes 4,000 SOMPI, where 2,000 SOMPI are the mandatory fees and the additional 2,000 SOMPI represent the priority (QoS) fee.
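The fee calculation described above is simple arithmetic:

```javascript
// fee (SOMPI) = mass (grams) * fee rate (SOMPI per gram)
function feeSompi(massGrams, feeRate = 1.0) {
  return massGrams * feeRate;
}

feeSompi(2000);      // 2000 - the mandatory minimum at fee rate 1.0
feeSompi(2000, 2.0); // 4000 - 2000 mandatory + 2000 priority (QoS)
```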
When selecting transactions from the mempool for mining, they are ordered by fee rate, with higher fee rate transactions being prioritized. (For more information, see Miner Selection).
Estimating Fee Rate
The Rusty Kaspa node provides a `getFeeEstimate()` RPC method. This method performs real-time analysis of the mempool and suggests fee rate values based on the current load.
The returned estimates are grouped into three sections—low, normal, and priority—and provide a fee rate along with a time estimate for transaction confirmation. By using one of these values, or interpolating between them, you can significantly increase the chances of your transaction being confirmed within the desired time frame.
NOTE: Fee rate estimates are best-effort analytical guesses. Sudden spikes in transaction volume are unpredictable, so there is always a chance that your transaction might take longer to be confirmed. However, this is a rare occurrence, and it is more likely that transaction pressure will decrease, allowing your transaction to be confirmed sooner than expected.
Estimating with Transaction Generator
When creating transactions using the Transaction Generator, you can specify the `feeRate` (or `fee_rate` in Rust) setting to apply the desired fee rate to the transactions. If no fee rate is specified, the default value of `1.0` is used.
If you want to allocate a specific fixed amount of KAS for priority fees, follow these steps:

1. Run the transaction generator with a fee rate of `1.0`. The generator will produce a `GeneratorSummary` object containing the total mass used by the generated transactions.
2. Divide the priority fee amount (in SOMPI) by the total mass (in grams) to calculate the fee rate multiplier for the priority.
3. Add `1.0` to this multiplier to get the final fee rate.
Example

If you want to allocate 5 KAS for priority fees and the total mass of your transactions is 2,043 grams, the mandatory fee is 2,043 SOMPI. To allocate 5 KAS (500,000,000 SOMPI) for priority, divide 500,000,000 SOMPI by 2,043 grams, resulting in a multiplier of approximately 244,738. Adding 1.0 gives a fee rate of approximately 244,739.

When you pass this fee rate to the Transaction Generator, it will create transactions with the intended priority fee, increasing the total transaction fee by approximately 5 KAS.
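The same calculation can be expressed in code. This sketch treats the fee rate in `sompi/gram` units, per `fee = feerate * mass(tx)`; the total mass value is assumed to come from the generator's summary:

```javascript
const SOMPI_PER_KAS = 100_000_000; // 1 KAS = 10^8 SOMPI

const totalMassGrams = 2_043;               // total mass reported by the generator
const priorityFeeSompi = 5 * SOMPI_PER_KAS; // 5 KAS priority allocation

// fee (SOMPI) = feeRate * mass (grams), so the priority component of the
// fee rate is the priority fee divided by the total mass.
const priorityMultiplier = priorityFeeSompi / totalMassGrams; // ~244,738
const feeRate = 1.0 + priorityMultiplier;

// Total fee at this fee rate: 2,043 SOMPI mandatory + ~5 KAS priority.
const totalFeeSompi = feeRate * totalMassGrams;
```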
Fee Estimate Data
The fee estimator is an integrated component in the Rusty Kaspa p2p node that performs real-time analysis of the mempool and suggests fee rate values based on the current load. The fee estimator returns an `RpcFeeEstimate` object, which contains a set of `RpcFeerateBucket` objects. Each `RpcFeerateBucket` contains `fee_rate` and `estimated_seconds` fields.
The buckets are divided into three categories:

- Priority Bucket: The top-priority feerate bucket provides an estimate of the fee rate required for sub-second DAG inclusion. For all buckets, feerate values are expressed in `sompi/gram` units. The required fee can be calculated by multiplying the transaction mass by the feerate: `fee = feerate * mass(tx)`.
- Normal Buckets (`Vec<RpcFeerateBucket>`): A vector of normal-priority feerate values. The first value guarantees an estimate for sub-minute DAG inclusion. Other values in the vector have shorter estimation times than the `low_bucket` values. You can interpolate between the priority, normal, and low buckets to compose a complete feerate function on the client side. The API makes an effort to sample enough key points on the feerate-to-time curve for meaningful interpolation.
- Low Buckets (`Vec<RpcFeerateBucket>`): A vector of low-priority feerate values. The first value guarantees an estimate for sub-hour DAG inclusion.
Most wallets will use the `priority_bucket` and the first values of `normal_buckets` and `low_buckets` to offer users three choices: `priority`, `normal`, and `low` fee rates.
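A minimal sketch of this three-choice pattern is shown below. The field names (`priorityBucket`, `normalBuckets`, `lowBuckets`, `feerate`) follow the general shape of the estimate object described above but should be treated as assumptions; the mock object stands in for a real `getFeeEstimate()` response:

```javascript
// Pick a fee rate from a getFeeEstimate()-style response.
function pickFeerate(estimate, urgency /* "priority" | "normal" | "low" */) {
  switch (urgency) {
    case "priority": return estimate.priorityBucket.feerate;
    case "normal":   return estimate.normalBuckets[0].feerate;
    case "low":      return estimate.lowBuckets[0].feerate;
    default: throw new Error(`unknown urgency: ${urgency}`);
  }
}

// Mock response, illustrating the assumed shape only.
const mockEstimate = {
  priorityBucket: { feerate: 3.5, estimatedSeconds: 0.2 },
  normalBuckets: [{ feerate: 1.4, estimatedSeconds: 30 }],
  lowBuckets: [{ feerate: 1.0, estimatedSeconds: 1800 }],
};

pickFeerate(mockEstimate, "normal"); // 1.4
```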
References

- Rust SDK `get_fee_estimate_call` RPC method
- Rust SDK `RpcFeeEstimate` struct returned inside the `GetFeeEstimateResponse`
- WASM SDK `RpcClient.getFeeEstimate()` method
- WASM SDK `IFeeEstimate` interface
Batch Transactions
Transaction mass limits can result in the following problem:
A standard transfer transaction can typically carry 83 inputs and 2 outputs. If you have 100 UTXOs with a value of 1 KAS each and want to transfer 99 KAS (with 1 KAS reserved for fees), you will not be able to create such a transaction due to the mass limit.
There are two ways to handle this problem:
- Create a set of compound transactions where all KAS is transferred to the user's address (thus "compounding" them) and then use the newly created UTXOs to transfer out.
- Create a set of "chained" (a.k.a. "batch") transactions where you have "in-flight" compounding by submitting transactions that reference each other sequentially to the node.
Chaining Transactions
The Kaspa p2p node will accept transactions that contain inputs from the UTXO index or inputs of transactions currently present in the mempool. As such, you can create transactions that reference each other within your application and then submit them sequentially (or in the reference order).
When referencing a previously created transaction that has not yet been submitted, you need to create a "virtual UTXO" that references the previous transaction via its outpoint (txid + output index) and specifies the DAA score of the UTXO as `u64::MAX`.
The above example can be handled in the following manner:
1. Create the 1st transaction containing 83 inputs and 1 output (typically to the wallet's change address).
2. Create the 2nd transaction, consuming the output of the first transaction (~83 KAS), and then add the remaining 17 inputs. The 2nd transaction can now have 2 outputs (the outbound transfer and, if needed, a change output).
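A hedged sketch of the "virtual UTXO" construction described above. The entry field names (`outpoint`, `amount`, `scriptPublicKey`, `blockDaaScore`, `isCoinbase`) follow the general shape of WASM SDK UTXO entries but should be treated as assumptions:

```javascript
const U64_MAX = 18446744073709551615n; // 2^64 - 1, marks an in-flight entry

// Build a UTXO entry that references a not-yet-submitted transaction's output.
function virtualUtxoFromPending(txid, outputIndex, amountSompi, scriptPublicKey) {
  return {
    outpoint: { transactionId: txid, index: outputIndex },
    amount: amountSompi,
    scriptPublicKey,         // pay-to-change-address script (assumed)
    blockDaaScore: U64_MAX,  // signals the UTXO is unconfirmed / in-flight
    isCoinbase: false,
  };
}

// Example: reference output 0 (~83 KAS) of the first chained transaction.
const entry = virtualUtxoFromPending("00".repeat(32), 0, 8_300_000_000n, "20ac...");
```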
Transaction Parallelism
Transactions that reference each other cannot be included in the same mined block. As such, the above example will be mined across two distinct DAA scores, taking at least 1 DAA score per transaction.
However, if you need to compound UTXOs requiring more than two transactions, you can compound them by creating and submitting 2 transactions in parallel and then submitting a final transaction that unifies them.
Example:

```
80 -> A
80 -> B
A + B -> C
```

In this example, `A` and `B` will be processed in parallel, and `C` will join them at the final stage, resulting in a processing duration of 2 DAA scores.
Miner Selection
When transactions are selected from the mempool for block inclusion, a noise function is applied to the selection process to introduce slight randomness. This is an integral part of the transaction selection process, as blocks produced by different miners can carry the same transactions. By randomizing the selection, there is a higher probability of different transactions being included in mined blocks.
The number of unique transactions included in concurrently mined blocks is known as "Effective TPS" (Effective Transactions Per Second).
Wallets
Overview
The Kaspa p2p node is a high-performance data processing system. Due to the significant volume of data processed by the network, the Kaspa p2p node uses a process called "pruning." The node retains all network data for 3 days and discards it afterward. The only data permanently retained is the DAG-related data required for cryptographic proof of network continuity, along with an optional UTXO index database that stores the current state of all network UTXOs.
Given these constraints, there are several ways to create wallet systems that operate with the Kaspa network:
Additionally, there are third-party solutions for interfacing with the Kaspa network, such as:
3rd Party APIs
While third-party APIs are developed by community contributors in close collaboration with Kaspa core developers, they are not an integral part of the Kaspa network. This means they may experience breaking changes or downtime independent of the Kaspa p2p network. As such, relying on third-party APIs poses additional risks to your infrastructure in terms of downtime and stability.
If these concerns are not critical to your project, third-party APIs can be an easier option, especially if your development environment cannot easily interact with Node RPC endpoints. However, for a more resilient infrastructure with high uptime guarantees, it is recommended to use the Rust SDK or WASM SDK, as they are built directly from the Rusty Kaspa framework.
UTXO Processing
One of the key differences between Kaspa and other cryptocurrency networks is that Kaspa operates directly on UTXOs. As a result, all wallet-related functionality interacts with UTXO updates, not transactions. For example, if a single transaction sends two outputs to your wallet, you will receive two separate UTXO notifications. In general, the usage pattern is that you send transactions but you receive UTXOs. The client-side Wallet SDK simplifies this by performing additional client-side processing, grouping incoming UTXO notifications into transaction-like objects.
IMPORTANT
To ensure optimal performance when working with UTXOs, it is advisable to reduce the number of UTXOs by compounding them. This can be achieved by creating Batch Transactions to the corresponding account's change address. Each UTXO loaded by the wallet consumes memory. In JavaScript runtime environments, such as web browsers and Node.js, there is typically a memory limitation of 2 GB. While native applications utilizing Rust do not face this limitation, a very high number of UTXOs can still affect RPC and wallet processing performance, leading to slower processing times.
It is highly recommended that if a single address (or account) contains more than 1 million UTXOs, they should be compounded via batch transactions to optimize performance.
Primitives
The Rusty Kaspa SDK provides a standard set of primitives required to build HD wallets. These primitives include:
- Mnemonics – Mnemonic processing and private key seed generation following the BIP-39 standard.
- Key Management – Cryptographic primitives for managing private and public keys using the secp256k1 elliptic curve.
- Derivations – Key derivation functionality using the BIP-32 and BIP-44 standards.
- Addresses – Kaspa address handling and validation.
The references in this section are not exhaustive but are intended to provide a starting point for developers to understand the basic building blocks of Rusty Kaspa wallets. A great way to dive deeper is by cloning the Rusty Kaspa repository and using "find in files" to search for terms of interest.
Mnemonics
Rusty Kaspa SDKs implement standard BIP-39 mnemonics for generating private key seeds. The implementation supports 12- and 24-word mnemonics with an optional passphrase (known as the 13th or 25th word).
Examples
Key Management
Rusty Kaspa uses the secp256k1 elliptic curve cryptography library in conjunction with a customized kaspa-bip32 crate, forked from the bip32 crate to extend its serialization and WASM capabilities.
Examples
WASM SDK examples of use can be found within other examples in the SDK repository:
Derivations
The standard address derivation path for Kaspa is `m/44'/111111'/0'` (used with 12- or 24-word BIP-39 seed phrases).
IMPORTANT: Please note, legacy wallets such as wallet.kaspanet.io and KDX use a different derivation path and a non-standard address derivation scheme. These wallets are not BIP-32 compatible and are deprecated. The Rust and WASM SDKs do not support these wallets, and integrators should not attempt to integrate with them. These wallets are being phased out, and only a few applications will support their wallet imports while advising users to migrate to accounts based on the standard derivation path.
Implementation
There are two methods for key derivation in Rusty Kaspa:
- `PrivateKeyGenerator` and `PublicKeyGenerator` helpers.
- `DerivationPath`, which can be used to create custom derivation paths.
WASM SDK
- PrivateKeyGenerator and PublicKeyGenerator classes in the WASM SDK that provide key derivation functionality based on the Kaspa derivation path standard.
- DerivationPath class in the WASM SDK that provides a way to create and manage derivation paths.
Rust SDK
- The functionality for key derivation is provided in the `kaspa-wallet-keys` crate.
Examples
Addresses
Address Format
A Kaspa address contains 3 components:
- Network Type
- Public Key Version (type)
- Public Key
The string representation of a Kaspa address contains the network type and a `bech32`-encoded public key suffixed with a checksum.
Example of a network address:
Mainnet
kaspa:qpauqsvk7yf9unexwmxsnmg547mhyga37csh0kj53q6xxgl24ydxjsgzthw5j
Testnet
kaspatest:qqnapngv3zxp305qf06w6hpzmyxtx2r99jjhs04lu980xdyd2ulwwmx9evrfz
References
- `Address` class in WASM SDK
- WASM SDK JavaScript example
- WASM SDK TypeScript example
- Rust source code (implementation)
- Rust SDK `kaspa-addresses` crate
Direct Node RPC
The Wallet SDK and Wallet API are built using the direct node RPC. You can use direct node RPC to interface directly with the Kaspa network; however, doing so requires additional logic and processing, which can become taxing for the developer.
There are two primary API methods used when interfacing with the node to obtain transaction information:

- `getUtxosByAddresses()` method - provides a list of UTXOs for a specific list of addresses.
- `subscribeUtxosChanged()` event subscription - given a set of Kaspa addresses, you will receive notifications for UTXO changes affecting these addresses.
NOTE: When using these two methods, you should subscribe for notifications first and then call `getUtxosByAddresses()` - this sequence ensures that you will not miss any notifications while processing an existing set of UTXOs.
When using `RpcClient` directly, in addition to subscribing to events, you must also install your own event listener callback using the `addEventListener` method. This callback will be called for any new subscription.
Subscribing for Event Notifications
IMPORTANT
There are two types of subscriptions - a local subscription to the RPC subsystem and a remote subscription to the Kaspa node. If the RPC client disconnects and reconnects, your local subscription (event listener registration or Rust channel receiver handling) will remain intact; however, the remote subscription will be lost. You will have to resubscribe for notifications against the node.
The best way to handle this is to listen to RPC events such as `connect` and subscribe to node notifications from within this handler.
WASM SDK
For each instance of the `RpcClient`, you must register an event listener once, but you must subscribe for node notifications each time you connect to the node: on connection, you have to inform the node that you are interested in specific events so that the node can start posting these events to your `RpcClient` instance.

`addEventListener` can be used with a single callback to receive all notifications, or with an event name to receive only specific notifications.
```javascript
// consume all events
rpc.addEventListener((event) => {
    console.log(event);
});

// consume only utxos-changed events
rpc.addEventListener("utxos-changed", (event) => {
    console.log(event);
});
```
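The reconnect pattern described above can be sketched as follows. The listener survives reconnects, but node-side subscriptions must be renewed on each `connect` event; `subscribeUtxosChanged()` follows the WASM SDK RPC surface, but treat the exact call shape as an assumption:

```javascript
// Install a "connect" handler that renews node-side subscriptions.
function installResubscribe(rpc, addresses) {
  rpc.addEventListener("connect", async () => {
    // Remote subscriptions are lost when the connection drops; renew them.
    await rpc.subscribeUtxosChanged(addresses);
  });
}
```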
Rust SDK (wRPC only)
In the Rust SDK, event notifications are delivered via async channels that transport the `Notification` enum. A clone of the notification channel receiver can be obtained by calling `KaspaRpcClient::notification_channel_receiver()`.
Event notification handling in the Rust SDK is somewhat more complex due to the multi-layered nature of the RPC stack. There are three layers involved in handling event notifications:
1. Invoking a subscription request: this triggers the node to start posting events.
2. Activating an internal notification listener: this is done via the `register_notification_listener` method, which is part of the core RPC notification subsystem.
3. Consuming notifications: this is done via the receiver channel obtained from the `KaspaRpcClient::notification_channel_receiver()` method.
This layering exists because the RPC notification subsystem is integral to the Kaspa Consensus processor (used internally by the Rusty Kaspa p2p node) and the same primitives are also utilized in the client-side instance.
RPC Connection and Disconnection Events
Note that RPC connection and disconnection events are managed via side channels known as `RpcCtl` channels, which are separate from the notification subsystem. A clone of the `RpcCtl` channel can be obtained by calling the `KaspaRpcClient::ctl_multiplexer()` method and creating a new receiver channel from it. The `Multiplexer` is a broadcast (MPMC) channel designed to handle multiple consumers.
```rust
// Create a new receiver channel bound to the RpcCtl multiplexer
let ctl_channel = rpc.ctl_multiplexer().channel();
```
For a detailed example of how notifications are handled in the Rust SDK, refer to the Rust wRPC subscriber example.
Wallet SDK
The Wallet SDK provides a set of primitives, built in Rust, that help the client process wallet-related events. The Wallet SDK serves as a foundation for the Wallet API (integrated wallet) and provides all the necessary tools to develop a custom Kaspa wallet solution.
There are three key components in the Wallet SDK that operate in tandem to monitor specific addresses on the network for transactions, create transactions, and submit them to the network.
Note that the Wallet SDK infrastructure employs an event-based architecture.
UtxoProcessor
`UtxoProcessor` is a singleton representing the wallet interface. It can be viewed as a wallet: it connects to the node and ensures that all internal processing is handled correctly. `UtxoProcessor` also provides an event processing interface where you can register for wallet-related event notifications.
UtxoContext
The `UtxoContext` interface represents a wallet account. It monitors any given subset of addresses and emits wallet-related events via the associated `UtxoProcessor`. On creation, `UtxoContext` can be provided with an `id` (a.k.a. account id). This `id` is posted with each `UtxoContext`-related event, allowing you to distinguish different accounts in a multi-account subsystem.
Transaction Generator
The transaction `Generator` interface is designed to create transactions using either a `UtxoContext` or a manually supplied set of UTXOs. The `Generator` is a helper class that simplifies the transaction creation process and handles various edge cases that can arise during transaction generation.

The Transaction Generator functions as an iterator, producing `PendingTransaction` objects. These transactions can either be aggregated or submitted to the network. Additionally, the generator produces `GeneratorSummary` objects, which provide an overview of the entire transaction creation process.
Infrastructure
Overview
The Wallet SDK components are based on an event-driven architecture, meaning your application should listen to events to update its state.
The recommended approach for building wallet infrastructure with the Wallet SDK is to create your own primitives that represent wallet components. These primitives should encapsulate the `UtxoProcessor` and `UtxoContext` and be updated in response to events.

For example:

- You can create a `Wallet` class that encapsulates the `UtxoProcessor`.
- You can create an `Account` class that encapsulates the `UtxoContext`.

The `Wallet` class can listen to events from the `UtxoProcessor` and update the corresponding `Account` instances accordingly.
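This wrapper pattern can be sketched as below. The event name (`"balance"`) and payload shape (`event.data.id`, `event.data.balance`) are assumptions chosen for illustration; consult the UtxoProcessor event documentation for the exact shapes:

```javascript
// Hypothetical Account wrapper around a UtxoContext's state.
class Account {
  constructor(id) {
    this.id = id;       // matches the UtxoContext id
    this.balance = null;
  }
}

// Hypothetical Wallet wrapper that routes processor events to accounts.
class Wallet {
  constructor(processor /* UtxoProcessor */) {
    this.accounts = new Map(); // account id -> Account
    processor.addEventListener("balance", (event) => {
      const account = this.accounts.get(event.data.id);
      if (account) account.balance = event.data.balance;
    });
  }
}
```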
Data Storage
When using the Wallet SDK, you need to provide your own storage backend to manage and track wallet data, including:
- Wallet Keys
- Wallet Derivation Data (BIP44 account index assignments)
- Transaction History
Additionally, you must implement logic to capture transaction timestamps and perform additional transaction processing on startup, as outlined in the Discovery Events section.
UtxoProcessor
`UtxoProcessor` is the main coordinator that manages UTXO processing across multiple `UtxoContext` instances. It acts as a bridge between the Kaspa node RPC connection, address subscriptions, and `UtxoContext` instances.
UtxoContext
`UtxoContext` allows you to track address activity on the Kaspa network. When an address is registered with `UtxoContext`, it aggregates all UTXO entries for that address and emits events whenever any activity occurs on these addresses.

The `UtxoContext` constructor accepts the `IUtxoContextArgs` interface, which can optionally include an `id` parameter. If supplied, this `id` will be included in all notifications emitted by the `UtxoContext` and in the `ITransactionRecord` object when transactions occur. If not provided, a random `id` will be generated. This `id` typically represents an account id in the context of a wallet application.

`UtxoContext` maintains a real-time cumulative balance for all addresses registered with it and emits balance update notification events when the balance changes.

`UtxoContext` can also be supplied as a UTXO source for the transaction `Generator`, allowing the `Generator` to create transactions using the UTXO entries it manages.
IMPORTANT
`UtxoContext` is intended to represent a single account. It is not designed to serve as a global UTXO manager for all addresses in a large wallet (such as an exchange wallet). For such use cases, it is recommended to perform manual UTXO management, as described in the Direct Node RPC section.
NOTE TO EXCHANGES
If you are building an exchange wallet, it is recommended to use `UtxoContext` for each user account. This allows you to track and isolate each user's activity (address set, balances, transaction records).
References
WASM SDK
Rust SDK
Generator
The transaction `Generator` simplifies the transaction creation process by:
- Dynamically calculating transaction mass.
- Accounting for custom transaction Fee Rates.
- Automatically creating daisy-chained (batch/sweep) transactions if your UTXO set exceeds the maximum mass allowed in a single transaction.
- Calculating the correct transaction change output.
The Transaction Generator functions as an iterator, producing `PendingTransaction` objects that can be submitted to the Kaspa network. A `PendingTransaction` wraps a regular `Transaction` and provides additional metadata, such as:

- A list of UTXO entries (required for transaction signing).
- A reference to the originating `UtxoContext` (useful for distinguishing regular outbound transactions from inter-account transfers).

Upon completion, the `Generator::summary()` function can be used to generate a `GeneratorSummary` object. This summary provides cumulative information about the transaction generation process, including the total transaction mass, which can be useful for calculating transaction fees.
References
WASM SDK
- Generator class.
- Example of use.
Rust SDK
Pending Transaction
`PendingTransaction` is a wrapper around a regular `Transaction` that provides additional metadata, such as:

- The list of UTXO entries (required for transaction signing).
- A reference to the originating `UtxoContext` (used for distinguishing regular outbound transactions from inter-account transfers).
References
WASM SDK
- PendingTransaction class
Rust SDK
- PendingTransaction struct
Generator Summary
`GeneratorSummary` is a class containing a summary produced by the transaction `Generator`. It contains the number of transactions and the aggregated transaction mass.
References
WASM SDK
- GeneratorSummary class
Rust SDK
- GeneratorSummary struct
Account IDs
As mentioned in the UtxoContext overview, when creating an instance of `UtxoContext`, you should assign it an `id`. This `id` is posted with each `UtxoContext`-related event, allowing you to distinguish different accounts in a multi-account wallet.

Account IDs must be represented by a hash (a 64-character hex string representing a 32-byte hash). The hash can be generated using the `sha256FromText` or `sha256FromBinary` functions (as well as their `sha256d` counterparts).

This requirement is imposed by internal Wallet SDK data types that use 32-byte hashes to track `UtxoContext` IDs.
Event-based architecture
The wallet architecture for both the wallet implementation and the underlying `UtxoProcessor` is event-driven. This means you are meant to affect the wallet subsystem by registering addresses for monitoring and submitting transactions, while the wallet posts appropriate transaction-related notification events and balance updates.
Wallet SDK Events
The `UtxoProcessor.addEventListener()` function provides a way to register event listeners that receive events posted by the wallet. These include standard events, such as RPC connection and disconnection events, as well as transaction-related events.
The full list of events posted by UtxoProcessor can be found here:
Transaction Events
Transaction events are posted when a transaction-related activity occurs on the network and is related to your monitored address set or when transactions are submitted via the Wallet SDK.
All transaction-related event data contains an instance of the `TransactionRecord`. The `data` field in the transaction record has the type `ITransactionData`, which changes depending on the type of the transaction event.
Transaction events exist in the following variants:
- `maturity` - The transaction has reached its maturity (the minimum required number of confirmations) and can be considered valid for use.
- `pending` - The transaction has been detected on the network and mined into a block but is not yet considered valid for use.
- `stasis` - This event is emitted only if a coinbase transaction is detected. Transactions identified in `stasis` mode should not be accounted for or communicated to the end user. Stasis transactions may undergo multiple reorg changes; as such, the information in the `stasis` event can be used to track mined transactions and UTXOs, but the wallet balance should not be updated. Eventually, the transaction migrates from the `stasis` to the `pending` state, at which point it should be processed by the wallet. (You can think of `stasis` as debug information for miners.)
- `reorg` - This event indicates that a UTXO has been invalidated due to a network reorg event. This event can generally be ignored, as the wallet will appropriately update its UTXO set in reaction to such an event.
- `discovery` - A discovery event is posted when the wallet starts up and enumerates existing UTXOs that belong to the addresses registered by the client. Please see the Discovery Events section for more information on handling transaction discovery.
NOTE: Durations used to measure transaction maturity can be configured using the `UtxoProcessor::setCoinbaseTransactionMaturityDAA()` and `UtxoProcessor::setUserTransactionMaturityDAA()` functions. However, while user transaction maturity is configurable and can be considered valid if mined into a block, coinbase transaction maturity must meet the minimum DAA score of the network coinbase maturity (which is 100 DAA at 1 BPS and 1,000 DAA at 10 BPS).
Transaction Data Variants
While the name of the event denotes the state of the transaction, the `TransactionRecord.data` field denotes the type of the transaction and the data it contains. In the WASM SDK, the `data` field will contain two subfields: `type`, which contains the kebab-case name of the data type, and `data`, which contains the associated transaction data variant as follows:
- `ITransactionDataReorg` - Reorg data containing UTXOs that were removed.
- `ITransactionDataIncoming` - Incoming transaction data.
- `ITransactionDataStasis` - Coinbase transaction data.
- `ITransactionDataExternal` - Unknown outgoing transaction* (see below).
- `ITransactionDataOutgoing` - Standard outgoing transaction.
- `ITransactionDataBatch` - Batch transaction data*.
- `ITransactionDataTransferIncoming` - Incoming transfer* (from another UtxoContext).
- `ITransactionDataTransferOutgoing` - Outgoing transfer* (to another UtxoContext).
- `ITransactionDataChange` - Change output of an outgoing transaction*.
While most of these variants are self-explanatory, the following variants need additional explanation:
- `ITransactionDataExternal` - Indicates that a UTXO has been removed from the monitored set. This can occur only when a transaction is issued from another wallet based on the same private key set. For example, if you import the wallet mnemonic/private key into "Wallet B" and issue a transaction from it (not initiated from "Wallet A"), "Wallet A" will receive such an event.
- `ITransactionDataBatch` - Indicates that the transaction being processed is a batch transaction. Batch transactions compound UTXOs into a single change output that is then used as a source for the final transaction to the destination.
- `ITransactionDataTransferIncoming` - Posted when `UtxoContext` detects a transfer from another `UtxoContext` attached to the same `UtxoProcessor`.
- `ITransactionDataTransferOutgoing` - Posted when `UtxoContext` makes a transfer to another `UtxoContext` attached to the same `UtxoProcessor`.
- `ITransactionDataChange` - Posted when an outgoing transaction is created that contains change. Generally, change transactions do not need to be recorded in the wallet history, as they are the same as the outgoing transaction.
Rust Documentation
Discovery Events
When registering a new address with `UtxoContext`, the `UtxoContext` performs a scan for any UTXOs that belong to this address. This process is known as Transaction Discovery.
UTXOs detected during Transaction Discovery may represent UTXOs previously seen by the wallet or UTXOs that arrived during the wallet's downtime. When receiving the `discovery` event, the client should take the transaction ID and check the transaction history to see if a transaction with that ID exists. If it does, the wallet can disregard the transaction (the wallet has seen it before); if it doesn't, the wallet should treat this UTXO as new (store it in the database and inform the wallet/account user of a new transaction).
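The handling logic described above can be sketched as follows (the record shape and the `history` store are hypothetical stand-ins for your own storage backend):

```javascript
// Decide whether a discovered UTXO's transaction is new to the wallet.
function handleDiscovery(record, history /* Set of known transaction ids */) {
  if (history.has(record.id)) {
    return "seen-before"; // the wallet already recorded this transaction
  }
  history.add(record.id); // persist it and notify the user of a new transaction
  return "new";
}
```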
Under the Hood
When receiving new addresses for monitoring, `UtxoContext` registers for transaction event notifications against these addresses using `subscribeUtxosChanged()`, and then performs `getUtxosByAddresses()` to enumerate the matching UTXOs. All UTXOs detected during this enumeration phase are posted as `discovery` events.
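The sequence can be modeled as below. The `subscribe` and `getUtxos` callbacks are stand-ins for the `subscribeUtxosChanged()` and `getUtxosByAddresses()` RPC calls, and the `UtxoSketch` shape is an assumption for this example:

```typescript
// Conceptual sketch of registering addresses for monitoring: subscribe first,
// then enumerate existing UTXOs and surface each one as a `discovery` event.
interface UtxoSketch {
    address: string;
    txId: string;
    amount: bigint;
}

async function registerAddresses(
    addresses: string[],
    // Stand-in for subscribeUtxosChanged()
    subscribe: (addresses: string[]) => Promise<void>,
    // Stand-in for getUtxosByAddresses()
    getUtxos: (addresses: string[]) => Promise<UtxoSketch[]>,
    emitDiscovery: (utxo: UtxoSketch) => void,
): Promise<void> {
    // Subscribe before enumerating so no UTXO change slips through
    // between the enumeration and the start of live notifications.
    await subscribe(addresses);
    const existing = await getUtxos(addresses);
    for (const utxo of existing) {
        emitDiscovery(utxo);
    }
}
```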
External access
IMPORTANT: It is crucial to understand the implications of sharing private keys between multiple wallets. If two wallets share the same private key and one wallet consumes incoming UTXOs to create transactions, the wallet that was offline will not be able to see these transactions once it comes back online, as the UTXOs that came in will have already been consumed by the other wallet. The only way to track this type of history is via Explorer APIs or DAG block aggregation. Therefore, to maintain a consistent transaction history, you should avoid sharing private keys between wallets.
Balance Events
Each time a transaction-related event occurs, the wallet subsystem emits a `balance` event (`IBalanceEvent`) that contains the current `IBalance` of the `UtxoContext`.
The `UtxoContext` balance consists of the following values:

- `mature`: The amount of funds available for spending.
- `pending`: The amount of funds that are being received but not yet confirmed.
- `outgoing`: The amount of funds that are being sent but have not yet been accepted by the network.
For more details, refer to the `IBalance` interface.
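For illustration, here is a simplified shape of the balance payload together with a display helper. Balance amounts in Kaspa are expressed in sompi, where 1 KAS = 100,000,000 sompi; the `BalanceSketch` interface is a reduced sketch of `IBalance`, not the SDK's actual definition.

```typescript
// Simplified sketch of the balance payload carried by a `balance` event.
interface BalanceSketch {
    mature: bigint;   // spendable
    pending: bigint;  // incoming, not yet confirmed
    outgoing: bigint; // sent, not yet accepted by the network
}

const SOMPI_PER_KAS = 100_000_000n;

// Convert a non-negative sompi amount to a KAS string with 8 decimal places.
function sompiToKas(sompi: bigint): string {
    const whole = sompi / SOMPI_PER_KAS;
    const frac = (sompi % SOMPI_PER_KAS).toString().padStart(8, "0");
    return `${whole}.${frac}`;
}

function describeBalance(balance: BalanceSketch): string {
    return `mature ${sompiToKas(balance.mature)} KAS, ` +
        `pending ${sompiToKas(balance.pending)} KAS, ` +
        `outgoing ${sompiToKas(balance.outgoing)} KAS`;
}
```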
Rust Documentation
Transaction Timestamps
Kaspa transactions do not contain timestamps. Instead, transactions contain the DAA (Difficulty Adjustment Algorithm) score. The DAA score can be translated into an approximate timestamp using the `RpcClient.getDaaScoreTimestampEstimate` RPC method.
This method accepts a list of DAA scores and returns a list of approximate timestamps corresponding to each DAA score.
Processing Timestamps
Transaction-related events contain a `TransactionRecord` interface that includes the `unixtimeMsec?` property.
This property is always `undefined` if the transaction occurs in real time. There is no need to convert the DAA score to a timestamp if you are processing live transactions; you can simply capture the current system time on the client side.
However, if the transaction is of the `discovery` type, the `unixtimeMsec` property will contain the approximate timestamp of the transaction.
This behavior is subject to change in the future, so it is recommended to always follow these steps:
- If the transaction is not of the `discovery` type, use the current system time.
- If the transaction is of the `discovery` type and the `unixtimeMsec` property is `undefined`, use the `RpcClient.getDaaScoreTimestampEstimate` method to get the approximate timestamp for the transaction.
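These steps can be sketched as a small helper. The record shape and the `estimateDaaTimestamp` callback (a stand-in for a call to `RpcClient.getDaaScoreTimestampEstimate`, which actually takes and returns lists of values) are assumptions for illustration:

```typescript
// Illustrative timestamp resolution for transaction records.
interface RecordTimestampSketch {
    type: string;          // e.g. "discovery"
    daaScore: bigint;
    unixtimeMsec?: bigint; // present on some discovery records
}

async function resolveTimestamp(
    record: RecordTimestampSketch,
    // Stand-in for RpcClient.getDaaScoreTimestampEstimate (one score in, one estimate out).
    estimateDaaTimestamp: (daaScore: bigint) => Promise<bigint>,
): Promise<bigint> {
    if (record.type !== "discovery") {
        // Live transaction: just capture the current system time.
        return BigInt(Date.now());
    }
    if (record.unixtimeMsec !== undefined) {
        // The record already carries an approximate timestamp.
        return record.unixtimeMsec;
    }
    // Discovery record without a timestamp: ask the node for an estimate.
    return estimateDaaTimestamp(record.daaScore);
}
```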
Wallet API
The Wallet API is a Rust implementation of core wallet functionality built on top of the Wallet SDK.
The core wallet implementation includes the following features:
- Support for opening multiple wallet files
- BIP39 mnemonic phrase support (with optional passphrase)
- Support for multiple private keys
- Concurrent support for BIP44 multi-account operations and monitoring
- Real-time transaction notifications
- Transfers between accounts
- Wallet file encryption
- Memory erasure of sensitive data
- Transaction history with transaction notes
Supported target platforms:
- Native (Linux, macOS, Windows)
- Node.js (via WASM build target)
- Browsers (via WASM build target)
- Browser Extensions (via WASM build target)
Storage backends:

- Native - Filesystem (Rust `std::fs`)
- Node.js - Filesystem (using the `fs` module)
- Browsers - `localStorage` + `IndexedDB`
- Browser Extensions - `chrome.storage.local`
Wallets using the Wallet API (such as the Kaspa CLI wallet and Kaspa NG) can interoperate and open each other's wallet files.
The Wallet implementation provides a procedural interface for wallet creation and management. You can create a wallet file, create any number of accounts within it, create, import, or export private keys, and estimate and submit transactions.
Wallet API is used by Kaspa CLI Wallet and Kaspa NG applications.
References
WASM SDK
Rust
- WalletApi trait - Rust trait declaring Wallet API methods.
- WalletAPI method arguments - structs used as arguments and return types for Wallet API methods.
- WalletAPI transports - transport implementations for Wallet API methods. NOTE: the transport client and server implementations provide data serialization only, not the actual transport protocol.
Contributing
This mdbook for Kaspa is available at https://github.com/aspectron/kaspa-mdbook/.
If you would like to contribute to the content:
```bash
cargo install mdbook
git clone https://github.com/aspectron/kaspa-mdbook
cd kaspa-mdbook
mdbook serve
```
Once started, you can navigate to http://localhost:3000.
In Visual Studio Code, you can search for Simple Browser in the command palette. This allows previewing the book while it is being edited.
`mdbook` works like a wiki - it scans for links and creates corresponding markdown files if they are missing. To create new pages, simply create links to the destination files.
When you've made changes, please submit a PR!