
dfuse-eosio: how to add a custom search field, such as the memo in transfer

To meet business requirements, we often need to search on custom parameters inside a transaction, such as the memo in a transfer. Following the REST API documentation, we issue a query like:

http://39.106.103.152:8080/v0/search/transactions?start_block=0&block_count=327722&limit=10&sort=desc&q=receiver:eosio.token+action:transfer+data.memo:return

the request is rejected because the field is not indexed:

"server error: unable to initiate to search: The following fields you are trying to search are not currently indexed: 'data.memo'. Contact our support team for more."

Solution

The action data fields that get indexed are defined by search-common-indexed-terms, whose default configuration is "receiver, account, action, auth, scheduled, status, notif, input, event, ram.consumed, ram.released, db.key, db.table, data.account, data.active, data.active_key, data.actor, data.amount, data.auth, data.authority, data.bid, data.bidder, data.canceler, data.creator, data.executer, data.from, data.is_active, data.is_priv, data.isproxy, data.issuer, data.level, data.location, data.maximum_supply, data.name, data.newname, data.owner, data.parent, data.payer, data.permission, data.producer, data.producer_key, data.proposal_name, data.proposal_hash, data.proposer, data.proxy, data.public_key, data.producers, data.quant, data.quantity, data.ram_payer, data.receiver, data.requested, data.requirement, data.symbol, data.threshold, data.to, data.transfer, data.voter, data.voter_name, data.weight, data.abi, data.code"

All fields that do not start with data. are "fixed": you can include them in the list to index them, or remove them to disable indexing, but new ones cannot be added (block_timestamp, for example, would be rejected).

All fields that start with data. are dynamic and can be anything; if they match a field name in an action's data, they will be indexed. For example, if you deploy a smart contract with an action that has an event_id field and data.event_id is in the list, you can search with receiver:mycontract data.event_id:something.

As for data.memo: you can add it to the list and it will work, but matching is exact, with no tokenization or interpretation of the memo data. This means that for a memo of the form first second third, data.memo:first will not match; only data.memo:'first second third' will match the field.

Final change

Edit dfuse.yaml and add the following under flags:

search-common-indexed-terms: "receiver, account, action, auth, scheduled, status, notif, input, event, ram.consumed, ram.released, db.key, db.table, data.account, data.active, data.active_key, data.actor, data.amount, data.auth, data.authority, data.bid, data.bidder, data.canceler, data.creator, data.executer, data.from, data.is_active, data.is_priv, data.isproxy, data.issuer, data.level, data.location, data.maximum_supply, data.name, data.newname, data.owner, data.parent, data.payer, data.permission, data.producer, data.producer_key, data.proposal_name, data.proposal_hash, data.proposer, data.proxy, data.public_key, data.producers, data.quant, data.quantity, data.ram_payer, data.receiver, data.requested, data.requirement, data.symbol, data.threshold, data.to, data.transfer, data.voter, data.voter_name, data.weight, data.abi, data.code, data.memo"

Append data.memo to the default list and restart dfuseeos.
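Blocks indexed after this change will carry the new field, so a query like the original one should then succeed against the local endpoint; note that a multi-word memo must stay quoted as a single term, since no tokenization is applied. The two curl calls below are only an illustration, reusing the local endpoint shown later in this article:

# single-word memo, exact match
curl "http://localhost:8080/v0/search/transactions?start_block=0&block_count=327722&limit=10&sort=desc&q=receiver:eosio.token+action:transfer+data.memo:return"

# multi-word memo, quoted as one term (%20 = space)
curl "http://localhost:8080/v0/search/transactions?start_block=0&block_count=327722&limit=10&sort=desc&q=receiver:eosio.token+action:transfer+data.memo:'first%20second%20third'"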

Reference

https://github.com/dfuse-io/dfuse-eosio/issues/209

dfuse - database dirty flag set (likely due to unclean shutdown)

Problem

Since the syncing node is managed and run by dfuseeos itself, the stability of dfuseeos directly affects the syncing node, at least from a testing point of view. How can we prevent this coupling from causing abnormal exits?

./dfuseeos start
Starting dfuse for EOSIO with config file './dfuse.yaml' 
Launching applications: abicodec,apiproxy,blockmeta,booter,dashboard,dgraphql,eosq,eosws,merger,mindreader,relayer,search-archive,search-forkresolver,search-indexer,search-live,search-router,statedb,tokenmeta,trxdb-loader 
Your instance should be ready in a few seconds, here some relevant links:

  Dashboard:        http://localhost:8081

  Explorer & APIs:  http://localhost:8080
  GraphiQL:         http://localhost:8080/graphiql

instance stopped, attempting restore from source (operator/operator.go:154) {"source": "snapshot", "command": "nodeos --config-dir=./mindreader --data-dir=/home/surou/Documents/Test_Dfuse/eosio/eos/programs/dfuseeos/dfuse-data/mindreader/data --pause-on-startup"}
<4>warn  2021-01-21T02:43:51.432 nodeos    chain_plugin.cpp:1199         plugin_initialize    ] 13 St13runtime_error: "state" database dirty flag set (log_plugin/to_zap_log_plugin.go:107) 
command terminated with non-zero status (superviser/superviser.go:179) {"status": {"Cmd":"nodeos","PID":4049750,"Exit":2,"Error":{"Stderr":null},"StartTs":1611197031417829539,"StopTs":1611197031434658318,"Runtime":0.016828781,"Stdout":null,"Stderr":null}}
<3>error 2021-01-21T02:43:51.433 nodeos    main.cpp:153                  main                 ] database dirty flag set (likely due to unclean shutdown): replay required (log_plugin/to_zap_log_plugin.go:107) 
cannot find latest snapshot, will replay from blocks.log (superviser/snapshot.go:153) 
restarting node from snapshot, the restart will perform the actual snapshot restoration (operator/operator.go:393) 
Received termination signal, quitting 
Waiting for all apps termination... 
app trxdb-loader triggered clean shutdown 

Solution

The first recommendation is to run mindreader independently from the rest of the stack. This greatly reduces the chance that an abnormal exit of dfuse-eosio (caused by another component) affects mindreader's operation; the same applies to node-manager, the app that manages a nodeos process.

The next step is to define a good recovery strategy based on taking snapshots and restoring from them automatically. Even without dfuse for EOSIO in the picture, nodeos is always at risk of an unclean shutdown, for example because of an out-of-memory error, an unexpected server restart, or other causes.

If you do not already have an automated snapshotting mechanism, the recommendation here is to run the node-manager app independently on the side. It keeps another synced copy of the chain's data and state, can also serve the nodeos RPC API, and is responsible for taking automatic snapshots at regular intervals:

# Storage bucket with path prefix where state snapshots should be done. Ex: gs://example/snapshots
node-manager-snapshot-store-url: <storage location, local path or supported cloud provider bucket>
# Enables restore from the latest snapshot when `nodeos` is unable to start.
node-manager-auto-restore-source: snapshot
#  If non-zero, a snapshot will be taken every {auto-snapshot-modulo} block.
node-manager-auto-snapshot-modulo: 100000 # Decrease for network with heavier traffic to take snapshot more often and shrink time to catch up from latest snapshot to HEAD
# If non-zero, after a successful snapshot, older snapshots will be deleted to only keep that number of recent snapshots
node-manager-number-of-snapshots-to-keep: 5 # Uses 0 to keep them all, useful for eventually regenerating dfuse merged blocks in parallel (not very likely but possible) 

Once these snapshots exist, you can configure the mindreader app to use them: whenever its nodeos process cannot start (which a snapshot restore almost always fixes), it automatically restores from the latest snapshot, starts in the past and catches up. The additional settings required are:

# Storage bucket where `node-manager` wrote its snapshot, must be shared with `mindreader` app.
mindreader-snapshot-store-url: <storage location, local path or supported cloud provider bucket>
# Enables restore from the latest snapshot when `nodeos` is unable to start.
mindreader-auto-restore-source: snapshot

All of this can run on the same machine as separate processes, and it can also be containerized to run in Kubernetes, for example.

Another option is to use the mindreader-stdin app. It is similar to the mindreader app but does not manage the nodeos process; instead, it consumes nodeos deep-mind data through a stdin pipe. The invocation looks like nodeos ... | dfuseeos start mindreader-stdin <flags or -c config.yaml> (possibly not the exact invocation; see the documentation if needed).
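A rough sketch of that pipe, assuming dfuse's instrumented nodeos build and its --deep-mind flag (the paths below are placeholders, not the exact invocation from the docs):

# run the deep-mind instrumented nodeos and feed its output to mindreader-stdin
nodeos --config-dir=./mindreader-config \
       --data-dir=./mindreader-data \
       --deep-mind \
  | dfuseeos start mindreader-stdin -c dfuse.yaml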

Adapted from https://github.com/dfuse-io/dfuse-eosio/issues/202

Installing and using dfuse-eosio

Building from source

Download the code

git clone https://github.com/dfuse-io/dfuse-eosio

Install Go

wget https://golang.org/dl/go1.15.6.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.15.6.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

Check the Go version

go version

Install yarn

curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

sudo apt update && sudo apt install yarn

GOPATH

The final step of the build uses go install, so make sure GOPATH is set:

go env

The dfuseeos binary ends up in $GOPATH/bin; if GOPATH is not set, set it temporarily:

export GOPATH=/home/<your user>/go
export PATH=$PATH:$GOPATH/bin

Install go-bindata

go get -u github.com/jteeuwen/go-bindata/...

Run the build

./scripts/build.sh -f -y
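When the script finishes, the dfuseeos binary should be in $GOPATH/bin (as noted above); a quick check:

ls -l $GOPATH/bin/dfuseeos
$GOPATH/bin/dfuseeos --help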

Install dfuse's custom build of nodeos

wget https://github.com/dfuse-io/eos/releases/download/v2.0.8-dm-12.0/eosio_2.0.8-dm.12.0-1-ubuntu-18.04_amd64.deb
sudo apt install ./eosio_2.0.8-dm.12.0-1-ubuntu-18.04_amd64.deb

Depends: libicu60 but it is not installable

If the install fails with this error, install the dependency first:

echo "deb http://us.archive.ubuntu.com/ubuntu/ bionic main restricted" | sudo tee /etc/apt/sources.list
sudo apt update && sudo apt install libicu60

Test dfuseeos

Initialize the configuration

./dfuseeos init

Start a test run

./dfuseeos start

could not locate box "dashboard-build"

If you see this error, it usually means the dashboard frontend assets were not built and embedded into the binary; rebuilding with ./scripts/build.sh -f -y (which runs the yarn build before go install) should regenerate them.

Serializing a time_point parameter when pushing a contract action

Convert the time to microseconds (uint64) and push that value; on chain it is automatically interpreted as a time_point.
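For reference, a time_point is serialized as the number of microseconds since the Unix epoch; on a GNU/Linux shell the value can be computed as shown below (how you pass it to your own action depends on your contract and tooling):

# current time in microseconds since the Unix epoch (uint64)
echo $(( $(date +%s%N) / 1000 ))

# a specific timestamp converted to microseconds
echo $(( $(date -d "2021-01-21T02:43:51Z" +%s) * 1000000 ))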

Using history-tools to sync chain data to PostgreSQL

Compared with the mongo plugin, the official history-tools filler is more efficient and supports PostgreSQL/RocksDB. It also avoids the dirty-data headaches that occur when the chain process exits abnormally mid-write, and the PostgreSQL filler corrects transactions affected by micro-forks, so today we try out the history-tools approach.

Building fill-pg

Mainly following https://eosio.github.io/history-tools/build-ubuntu-1804.html

Install the build dependencies

Install Clang 8 and the other required tools

sudo apt update && sudo apt install -y wget gnupg

cd ~
sudo wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -

sudo vi /etc/apt/sources.list
## append at the end of the file
deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic main
deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic main
deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-8 main
deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic-8 main

sudo apt update && sudo apt install -y \
    autoconf2.13        \
    build-essential     \
    bzip2               \
    cargo               \
    clang-8             \
    git                 \
    libgmp-dev          \
    libpq-dev           \
    lld-8               \
    lldb-8              \
    ninja-build         \
    nodejs              \
    npm                 \
    pkg-config          \
    postgresql-server-dev-all \
    python2.7-dev       \
    python3-dev         \
    rustc               \
    zlib1g-dev

sudo update-alternatives --install /usr/bin/clang clang /usr/bin/clang-8 100
sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-8 100

Build and install Boost 1.70. Adjust -j10 to match your machine; bad things happen if you do not have enough RAM for the number of cores you use:

cd ~
wget https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.gz
tar xf boost_1_70_0.tar.gz
cd boost_1_70_0
./bootstrap.sh
sudo ./b2 toolset=clang -j10 install

Build and install CMake 3.14.5. Adjust --parallel= and -j to match your machine; again, bad things happen if you do not have enough RAM for the number of cores you use:

cd ~
wget https://github.com/Kitware/CMake/releases/download/v3.14.5/cmake-3.14.5.tar.gz
tar xf cmake-3.14.5.tar.gz
cd cmake-3.14.5
./bootstrap --parallel=10
make -j10
sudo make -j10 install

Build history-tools

cd ~
git clone --recursive https://github.com/EOSIO/history-tools.git
cd history-tools
mkdir build
cd build
cmake -GNinja -DCMAKE_CXX_COMPILER=clang++-8 -DCMAKE_C_COMPILER=clang-8 ..
git submodule update --init --recursive
bash -c "cd ../src && npm install node-fetch"
ninja

At this point the build directory contains the fill-pg binary.

Configuration and usage

See https://eosio.github.io/history-tools/database-fillers.html for details.
On the first run, pass --fpg-create so the required tables are created.
If it is not the first run and you want to start clean, add --fpg-drop --fpg-create.
--fill-connect-to is the state-history-plugin endpoint to connect to; the default is 127.0.0.1:8080 (a minimal nodeos configuration for that side is sketched below).
We first demonstrate a run without any other options; for the rest, such as filtering with --fill-trx, see the documentation.
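fill-pg reads from a nodeos with the state_history_plugin enabled. A minimal config.ini sketch for that node, with the endpoint matching the --fill-connect-to default (adjust paths and ports to your setup):

# nodeos config.ini for the state-history node
plugin = eosio::state_history_plugin
trace-history = true
chain-state-history = true
state-history-endpoint = 127.0.0.1:8080
# note: nodeos must also be started with --disable-replay-opts when this plugin is enabled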

The environment variables and command line for a test run:

export PGUSER=       # PostgreSQL user name
export PGPASSWORD=   # PostgreSQL password
export PGDATABASE=   # PostgreSQL database name
export PGHOST=       # PostgreSQL host
export PGPORT=       # PostgreSQL port

./fill-pg --fill-connect-to 127.0.0.1:8080 --fpg-create

After connecting, the following tables are created:
account
account_metadata
action_trace
action_trace_auth_sequence
action_trace_authorization
action_trace_ram_delta
action_trace_v1
block_info
code
contract_index_double
contract_index_long_double
contract_index64
contract_index128
contract_index256
contract_row
contract_table
fill_status
generated_transaction
permission
permission_link
protocol_state
received_block
resource_limits
resource_limits_config
resource_limits_state
resource_usage
transaction_trace
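Once fill-pg has been running for a while, a quick sanity check with psql (these queries assume the default schema name, chain, described in the fill-pg documentation; the same PG* environment variables as above apply):

# progress of the filler
psql -c 'SELECT * FROM chain.fill_status;'

# a few rows from one of the tables listed above
psql -c 'SELECT * FROM chain.block_info LIMIT 5;'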

Testing

Check whether the ports are listening:

sudo lsof -i -P -n | grep LISTEN

To test the history node's websocket port, you can use websocat:

wget https://github.com/vi/websocat/releases/download/v1.6.0/websocat_amd64-linux-static
mv websocat_amd64-linux-static websocat
chmod +x websocat
./websocat ws://127.0.0.1:8080/

Note

The history-tools approach is still experimental; during testing we ran into the following issue:
https://github.com/EOSIO/history-tools/issues/103
We will follow up on it when time permits.