This issue is to document my research on the Avalanche blockexplorer and to track the progress of deploying it for our statalanche network. The blockexplorer stack has 3 parts, which I'll describe in more detail below.

Ortelius

This is the indexer, which indexes all consensus events, decisions and chain transactions. It runs 3 processes (producer, indexer, api) and requires Kafka, Zookeeper, Redis and MySQL. The producer connects to an avalanchego node and puts the events into Kafka topics. The consumer reads those topics and indexes them into MySQL. The API connects to MySQL and exposes the data via a REST API. Zookeeper is required by Kafka for replication. Redis caches index queries for the API.

For our test network this stack is definitely too much, since we don't have much going on. The processes are mostly idle and don't consume many resources. It would fit on one of the current node servers, so we don't need to provision another one.

We have an Ortelius fork with changes for our statalanche network at corpetty/ortelius#1. From what I can tell we don't need to make any code changes; a custom docker-compose file is enough. For testing I've been using the docker-compose files in the repo, modified a little. The stack deploys an avalanchego node as part of it, so I had to build the docker image for our avalanchego fork and push it to a registry. For now it's on Docker Hub, but they have pull/push limits on the free plan and we'll need to switch to another registry sometime. I've checked the data in the Kafka topics as well as in MySQL, and the indexing works fine. I have it running locally and it's ready to be deployed. I'm writing a custom docker-compose file based on the ones I've been using for testing; it will make it much easier to manage all the services. For comparison, the Avalanche mainnet instance is running at.

Avalanche Explorer

The Avalanche Explorer connects to an instance of avalanchego and the Ortelius REST API. We have a fork with fixes at corpetty/avalanche-explorer#1. Basically they hardcode the network ID (1 is mainnet, the statalanche ID is 115110116) in a lot of places, and these need to be replaced. I have this running locally and it can display the network activity (24h volume, number of validators, total staked, ...) and show details for all validators (how much is staked, node-ID, ...). What is not working is showing the transactions and top assets. My current assumption is that it uses the Blockscout API to get this info.

Running it is a bit annoying since it's based on node.js. I had to install and configure nvm to be able to run it on my machine, as it requires a specific version of node. To make deployments on our servers easier I've written a Dockerfile for it, which I'm testing locally.

Corey made a fork of avalanchejs with changes for statalanche. The blockexplorer uses this lib, but I haven't switched it over to the fork yet. This could also cause the issue.

Blockscout

Blockscout is a blockexplorer for EVM-based blockchains. The Avalanche mainnet instance is running at. The Avalanche team is maintaining a private fork of it, which they are not interested in making public. Since Avalanche exposes a geth-compatible JSON-RPC API, it should work for us too. I was able to compile it (it's written in Elixir and a Dockerfile is available), but not to run it:

T04:29:11.546 application=indexer fetcher=block_catchup first_block_number=18 last_block_number=9 ** (FunctionClauseError) no function clause matching in _to_elixir/1

The last two days I was looking more into this error, but I couldn't find out why it's happening. It's definitely not trivial to deploy Blockscout, and it would require more time.
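The Ortelius stack described above can be sketched as a docker-compose file. This is only an illustration of the topology (one producer, one indexer, one API, plus Kafka, Zookeeper, Redis and MySQL): the image names, registry paths, commands, ports and environment variables here are assumptions, not the actual values from the compose files in the Ortelius repo.

```yaml
# Hypothetical docker-compose sketch of the Ortelius stack.
version: "3"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0   # required by Kafka for replication
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:5.5.0
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: ortelius

  redis:
    image: redis:6-alpine                    # caches index queries for the API

  avalanchego:
    # Our avalanchego fork, built and pushed to a registry beforehand
    # (registry path is a placeholder).
    image: registry.example.org/statalanche/avalanchego:latest

  producer:
    # Reads events from avalanchego and writes them to Kafka topics.
    image: registry.example.org/statalanche/ortelius:latest
    command: ["stream", "producer"]          # assumed subcommand names
    depends_on: [kafka, avalanchego]

  indexer:
    # Consumes the Kafka topics and indexes them into MySQL.
    image: registry.example.org/statalanche/ortelius:latest
    command: ["stream", "indexer"]
    depends_on: [kafka, mysql]

  api:
    # Serves the indexed data from MySQL over REST.
    image: registry.example.org/statalanche/ortelius:latest
    command: ["api"]
    depends_on: [mysql, redis]
    ports:
      - "8080:8080"
```

A single-replica Kafka/Zookeeper like this is only sensible for a mostly idle test network; a production deployment would want replication and persistent volumes.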
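For the Avalanche Explorer, a container avoids the nvm/node-version dance entirely. This is a hypothetical sketch of what such a Dockerfile could look like for a node.js frontend, not the one mentioned above; the node version, build commands and output directory are assumptions and should be pinned to whatever the repo's .nvmrc and package.json actually require.

```dockerfile
# Hypothetical multi-stage Dockerfile for a node.js explorer frontend.
FROM node:14-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve the static build output with nginx.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

The multi-stage build keeps the specific node version confined to the build stage, so the deployed image is just nginx plus static files.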
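One way to sanity-check the geth-compatible JSON-RPC API before pointing Blockscout at it is to issue a standard eth_* request by hand. A minimal sketch, assuming the usual avalanchego C-Chain RPC route and default local port (adjust both for the actual node):

```python
import json
import urllib.request

# Assumed default local avalanchego C-Chain RPC endpoint.
RPC_URL = "http://localhost:9650/ext/bc/C/rpc"

def rpc_body(method: str, params: list, request_id: int = 1) -> bytes:
    """Encode a JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode()

def latest_block_number(url: str = RPC_URL) -> int:
    """Fetch the latest block height, as an indexer's catch-up fetcher would."""
    req = urllib.request.Request(
        url,
        data=rpc_body("eth_blockNumber", []),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)["result"]  # hex string such as "0x12"
    return int(result, 16)
```

If calls like eth_blockNumber and eth_getBlockByNumber answer correctly here but Blockscout's block_catchup fetcher still crashes, that points at a response shape the Elixir decoder doesn't expect rather than at the RPC endpoint being unreachable.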