Hacking the Aeternity Codebase
Building
See Build for details.
Dependencies
Ubuntu:
sudo apt install autoconf build-essential cmake erlang libsodium-dev libgmp-dev
macOS:
brew install erlang@24 openssl libsodium autoconf gmp cmake automake
The Aeternity build system uses Rebar3 to do the heavy lifting, and wraps it in a Makefile for ease of use. To hack on Aeternity you need some basic knowledge of Rebar3. See Quick Guide to Rebar for a comprehensive introduction.
Configuration files
You can use either .json or .yaml files to specify the user-level configuration. By default, the system looks for ~/.aeternity/aeternity/aeternity.{json,yaml} or aeternity.{json,yaml} in the top directory. You can also set environment variables of the form AE__..., e.g. AE__HTTP__CORS__MAX_AGE. See docs/configuration.md for details.
The system first reads the usual Erlang system configuration files (specific per release, in _build/prod/rel/aeternity/releases/*/). These are generated from the corresponding source files under config/: vm.args for Erlang VM options, and sys.config for overriding the Erlang application defaults (the .app files).
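For illustration, here is a sketch of a user-level configuration entry and its environment-variable override. The http/cors/max_age key path is inferred from the AE__HTTP__CORS__MAX_AGE example above and may not match the real schema exactly; check docs/configuration.md for the authoritative keys.

```yaml
# Sketch of ~/.aeternity/aeternity/aeternity.yaml (key path inferred from
# the AE__HTTP__CORS__MAX_AGE example above; not the authoritative schema):
http:
    cors:
        max_age: 1800
```

Setting AE__HTTP__CORS__MAX_AGE=1800 in the OS environment would override the same value, with the double underscores separating the nesting levels of the configuration keys.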
Running
See Operation for details.
Starting the system with an Erlang terminal prompt
Opening an Erlang shell for running a unit or integration test
Rebar lets you open an Erlang shell with one or more profiles applied, such as test. This sets up paths to test apps, etc., which will not be available in the default profile. By default all apps listed in the release spec will be started; to avoid this, specify --apps "":
or for system testing
The system can then be started manually from the Erlang shell like this:
after which you can do your testing. To clean up the temporary directory that was created, do:
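The code snippets this section refers to are not preserved in this copy. As a sketch, opening such a shell with Rebar3 might look like the following; the --apps "" flag and the test profile come from the text above, but the exact invocations (and the system-test profile name) are assumptions:

```shell
# Open an Erlang shell with the test profile applied, without starting
# the apps listed in the release spec:
./rebar3 as test shell --apps ""

# Or, for system testing (profile name assumed):
./rebar3 as system_test shell --apps ""
```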
Aeternity Code structure
How the system starts
There is no single function call to start the Aeternity system. There is a start script (_build/prod/rel/aeternity/bin/aeternity) which is generated by Rebar3 when you run e.g. make prod-build or make prod-package. The package would typically be installed under ~/aeternity/node, and it is assumed that you start the system from the install directory (or directly from the build directory). You typically run it as bin/aeternity daemon or bin/aeternity console.
The start script is a modified version of the "extended start script" that Rebar3 would normally generate from its standard template. The source file for the Aeternity version is located in scripts/aeternity_bin. This file should be kept up to date with changes in the upstream Rebar3 version (which is part of relx).
The start script starts Erlang with a custom boot file generated by Rebar3, named start.boot or aeternity.boot (in _build/prod/rel/aeternity/releases/*/). It is specific to each release and specifies exactly which processes and applications should be started when the Erlang system boots. (The .boot file is in a binary format that the system can read at boot time without needing to load any other modules, such as for parsing text. To see what the boot file does, look at the corresponding source file start.script (or aeternity.script) instead.)
Because the system starts from a boot file, applications not listed in the boot script will not be available in the code path when the system is running, even if they are normally part of the standard Erlang/OTP distribution; e.g., debugger, wx, or parsetools. If wanted, such extras must be added manually, or run from a separate Erlang instance.
Multiple releases can be installed alongside each other, and the code paths in the boot script typically name the versions of the applications (in the lib directory under the installation root), so e.g. release 1.1 could be using the version "lib/xyz-2.0.3" of application xyz, while release 1.2 uses the version "lib/xyz-2.1.1". The start script (bin/aeternity) picks the release version to use.
The .boot and .script files are generated automatically by the release-building tools of Rebar3, using the relx specification section of the rebar.config file. This is where you list all the Erlang applications that should be included in the release and started when the Aeternity system boots. (They will be "started" regardless of whether they actually start any processes. This loads the app configuration.) The start order in the .boot file is made to obey the application dependencies found in the individual *.app files (usually generated from *.app.src files) that provide the per-application metadata: for example, the apps/aehttp/src/aehttp.app.src file specifies that aecore must be started before aehttp can start. Hence, when the Erlang system boots, it will launch all specified applications in a suitable order, and when all are running, the system is up. There is no specific single entry point to the system.
The boot script also includes the app configuration from the {env, ...} sections of the .app (or .app.src) files at build time, and sets these configurations as the system boots. Modifying the .app files in the installed system has no effect. Use sys.config or command line options to override the build-time configuration.
Furthermore, Rebar3 doesn't rebuild dependency apps (under _build/default/lib) if they get modified, so updating e.g. _build/default/lib/lager/src/lager.app.src will have no effect on lager.app (and hence not on the produced release build); you must delete the existing _build/default/lib/lager/ebin/lager.app file to force Rebar3 to rebuild it.
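The dependency declaration described above lives in the applications list of an app's .app.src file. The following is a simplified sketch of that shape, not the contents of the real aehttp.app.src:

```erlang
%% Simplified sketch of an *.app.src file (not the actual aehttp.app.src).
%% The `applications` list is what the boot-script generation uses to order
%% startup: everything listed here must be running before this app starts.
{application, aehttp,
 [{description, "HTTP API for the Aeternity node"},
  {vsn, "git"},
  {registered, []},
  {applications, [kernel, stdlib,
                  aecore            %% aecore starts before aehttp
                 ]},
  {env, []},                        %% build-time app configuration (see above)
  {mod, {aehttp_app, []}}
 ]}.
```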
The (nonstandard) setup application, which is assumed to start very early in the boot sequence, provides extra startup configuration magic:
It scans all application configurations for entries with the key '$setup_hooks', specifying callback functions to be executed by setup. (See e.g. aeutils.app.src.) Because the boot script loads all application configurations and modules before it starts the first application, the full list of applications and their configuration is known when setup is started.
The Aeternity system uses these callbacks to read the aeternity.{yaml,json} file (aeu_env:read_config()), inject overrides from OS environment variables AE__... (aeu_env:apply_os_env()), and load plugins (aeu_plugins:load_plugins()) before the rest of the system starts, as well as perform sanity checks on configurations (aecore_env:check_env(), etc.).
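A '$setup_hooks' entry lives in an application's {env, ...} section and maps to {Module, Function, Args} callbacks. The sketch below is modeled on the description above; the phase numbers and exact nesting are assumptions, not a verbatim copy of aeutils.app.src:

```erlang
%% Sketch of a '$setup_hooks' entry (structure assumed; see aeutils.app.src
%% for the real one). Each {M, F, Args} tuple is a callback run by setup,
%% with the numbers determining the order of execution.
{env, [
  {'$setup_hooks',
   [{100, {aeu_env, read_config, []}},      %% read aeternity.{yaml,json}
    {110, {aeu_env, apply_os_env, []}},     %% apply AE__... overrides
    {200, {aeu_plugins, load_plugins, []}}  %% load plugins before normal boot
   ]}
]}
```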
It has "smart" get_env() functions which can perform advanced variable expansion on the configuration values. E.g., if an application x has a configuration entry {log_dir, "$HOME/log"}, then calling setup:get_env(x, log_dir) will return something like "/home/username/log". (This only works on variables defined via setup itself, not general shell environment variables; $HOME is a predefined special case.)
Setup has a configuration option data_dir which the Aeternity system uses to know where its database is located. The directory needs to already exist and be populated at system start, else the startup fails.
The (nonstandard) app_ctrl application provides additional control over the start order in the system. Normally, the applications are started in the order listed in the relx specification of rebar.config, modified to obey the dependencies listed in the individual .app.src files. This means that applications can specify that they must be started after other applications that they know about and depend on, but application x cannot specify that it needs to start before another application y which is unaware of x and whose dependencies (in its .app file) cannot be modified.
The app_ctrl app hooks into the kernel application, which is always the first to start, by configuring app_ctrl_bootstrap to run as a (dummy) handler of the logger functionality in the kernel. This is done in the sys.config. When the kernel app starts, this launches the app_ctrl_server process (but not the app_ctrl application itself).
The app_ctrl_server looks for configuration both in the normal app_ctrl app environment, and by scanning other applications for entries with the key '$app_ctrl' (using functionality from setup; see above). In Aeternity, this can be found in the aecore.app.src file.
The app_ctrl configuration can specify, per application, that the app needs to be started before certain other apps. It can also define "roles", which are sets of apps, and "modes", which are sets of roles. Applications that are not explicitly mentioned in the configuration are left to the standard application controller.
If you try to make an application in Aeternity depend on (start after) one of the applications that are managed by app_ctrl, such as aehttp, then you will get a crash during startup with error messages containing {orphans,[...]} and apps_not_found: [...]. To fix this you must also add your app to the same "roles" in the '$app_ctrl' section of aecore.app.src.
When the real app_ctrl application is finally started, it just sets up a supervisor and a worker process which acts as a proxy that links itself to the already running app_ctrl_server process, so that the application crashes if the server process crashes.
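The roles/modes structure described above might be sketched as follows; this is a hypothetical illustration of the concepts, and the real key names and nesting in aecore.app.src may differ:

```erlang
%% Hypothetical sketch of an '$app_ctrl' entry (the real configuration is
%% in aecore.app.src; key names here are illustrative only).
{'$app_ctrl',
 [{roles, [{basic, [aecore]},       %% a "role" is a set of apps
           {api,   [aehttp]}]},     %% add your own app to a role if it must
                                    %% start in the same group as aehttp
  {modes, [{normal, [basic, api]}]} %% a "mode" is a set of roles
 ]}
```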
Logging is done via the Lager app. A handler aeu_lager_logger_handler for the standard OTP logger is also set up in the sys.config, which forwards standard log messages to Lager.
The aeutils.app.src file configures a hook for the setup app, making it call aeu_logging_env:adjust_log_levels() when setup starts. (Note that the aeutils configuration must thus be loaded before setup runs, which it will be when running from a boot script.) This will also call aeu_logging_env:expand_lager_log_root() to ensure that lager has its log_root configuration set, using setup:log_dir() as the default. Furthermore it rewrites the log root setting to be an absolute path, to ensure that the logging is not affected by changes to the current working directory of the Erlang VM during execution. As soon as lager starts, it will create the log directory and all log files using its current configuration.
Since setup and lager don't know about each other's existence, their .app files do not specify any dependency between them. Their relative order in the relx specification thus decides their actual order in the boot script.
The lager configuration in sys.config sets up both a handler that writes to the console, and a handler that writes to the aeternity.log logfile. It also configures additional logging sinks, for which corresponding modules are generated dynamically, so that the sink whose name is epoch_mining_lager_event can be used by calling epoch_mining:info(...), and so on. Hence you will not find a source module named epoch_mining.erl in the codebase. Most of these extra sinks will not log to the console, only to log files.
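A sink configuration of this kind uses lager's extra_sinks option. The sketch below shows the general shape; the handler details and file name are illustrative, not copied from the real sys.config:

```erlang
%% Simplified sketch of a lager sink in sys.config (handler details are
%% illustrative). The sink name epoch_mining_lager_event makes lager
%% generate an epoch_mining module, so code can call epoch_mining:info(...).
{lager,
 [{extra_sinks,
   [{epoch_mining_lager_event,
     [{handlers,
       [{lager_file_backend,
         [{file, "epoch_mining.log"}, {level, info}]}]}  %% file only,
                                                         %% no console handler
     ]}
   ]}
 ]}
```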
Main applications (in reverse start order), most under the main repo (github.com/aeternity/aeternity.git) in the apps directory; the rest will be found under _build/default/lib:
aedevmode
(Something about keypairs for testing. Runs aedevmode_emitter.)
aesync
The aesync app just launches aec_connection_sup, which exists under the aecore application. It is a "supervisor for servers dealing with inter node communication".
aestratum
An implementation of the server-side part of the Stratum protocol. The purpose of the protocol is to formalize the coordination of information exchange between a pool server and pool clients. See docs/stratum.md.
aemon
Network monitoring (disabled by default). Uses the statsd backend provided by aec_metrics.erl. See docs/monitoring.md.
aehttp
The HTTP API. This app doesn't have any actual child processes of its own. It just starts Cowboy endpoints.
The Cowboy setup is done in aehttp_app, which calls aehttp_api_router to get the endpoint data.
The endpoints are specified in apps/aehttp/priv/oas3.yaml, which is used to generate the callback modules oas_endpoints, endpoints (the old Swagger version), and rosetta_endpoints. aehttp_api_router calls these to get the data, and then filters it depending on what should be enabled. Note that the important enabled_endpoint_groups setting is computed in aehttp_app:check_env(), which runs from a setup hook defined in aecore.app.src.
All endpoints enter via aehttp_api_handler:handle_request_json(), which dispatches to one of the modules aehttp_dispatch_ext, aehttp_dispatch_int, or aehttp_dispatch_rosetta. These may reject a request if the system is overloaded, or put it in a run queue for later (see aec_jobs_queues). aehttp_api_handler also does the conversion between JSON-as-text and JSON-as-Erlang-terms for request inputs and outputs, using the jsx library.
For example, the request GetCurrentKeyBlockHeight is actually handled in the module aehttp_dispatch_ext, like this:
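The original snippet is not preserved in this copy. The following is a hedged sketch of what such a dispatch clause might look like; the function shape, helper calls, and return format are assumptions, not copied from the real aehttp_dispatch_ext module:

```erlang
%% Hypothetical sketch of a dispatch clause; the clause shape, helper
%% functions, and return format are assumptions for illustration only.
handle_request('GetCurrentKeyBlockHeight', _Params, _Context) ->
    case aec_chain:top_key_block() of
        {ok, Block} ->
            Height = aec_blocks:height(Block),
            {200, [], #{height => Height}};   %% status, headers, JSON body
        error ->
            {404, [], #{reason => <<"Block not found">>}}
    end.
```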
aechannel
State channels (aesc_...). The aesc_fsm.erl state machine is described by the PlantUML file docs/state-channels/fsm.puml.
aeapi
A single facade module for the internal API functions. Does not launch any child processes, or even any supervisor.
aecore
The Core Aeternity Application supervisor tree. Runs the aec_worker_sup, aec_consensus_sup, and aec_conductor_sup. It used to run the aec_connection_sup as well, before that was moved to the aesync app.
aec_worker_sup
Runs aec_metrics, aec_keys, and aec_tx_pool.
aec_consensus_sup
Initially empty.
aec_conductor_sup
Runs aec_conductor and aec_block_generator.
aecli
The CLI, based on ecli. The supervisor is started with no children.
aefate
The FATE virtual machine. A library application; it does not start any processes.
ecrecover
(github.com/aeternity/ecrecover.git)
Library for verifying Ethereum signatures.
aega
Library for Generalized Accounts.
aeprimop
Library for primitive operations to modify chain state objects.
aeoracle
Library for Oracles.
aens
Naming System library.
aecontract
Library for Contracts.
aevm
The older, Ethereum-style virtual machine (AEVM).
aebytecode
(github.com/aeternity/aebytecode.git)
Library and standalone assembler for Aeternity bytecode, supporting both AEVM bytecode and FATE bytecode.
aeserialization
(github.com/aeternity/aeserialization.git)
Serialization helpers for the Aeternity node.
aetx
Library for the Transactions ADT.
aeutils
Library with various utility functions. Starts a supervisor with no children.
aeminer
(github.com/aeternity/aeminer.git)
Erlang library to work with CPU and CUDA cuckoo miners.
aecuckoo
(github.com/aeternity/aecuckoo.git)
Cuckoo CPU miner binaries.