This document outlines how rug pull index's algorithm ranks data sets.

When ranking all data sets sold on Ethereum, the first step is acquiring metadata for all markets on Ocean Protocol and Big Data Protocol. Both projects host a version of the oceanprotocol/market. They take their market data from an off-chain metadata cache called Aquarius.

The following sources are used to collect all relevant data to generate a ranking:

- From Aquarius, we download the metadata of all data sets (BDP & OP). It includes names, symbols, prices and the liquidity pool addresses.
- From ethplorer.io and Covalent, we download the top token holders of each liquidity pool.
- From oceanprotocol/list-purgatory, we download both `list-accounts.json` and `list-assets.json`.
- From CoinGecko, we download the latest prices of OCEAN/EUR and BDPToken/EUR.
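A minimal sketch of such a crawler in Python, with hypothetical URLs standing in for the real endpoints (only the list-purgatory files have stable raw URLs on GitHub; the Aquarius, Ethplorer, Covalent and CoinGecko calls would need API-specific parameters):

```python
import json
import urllib.request

# Hypothetical source list; the real crawler's endpoints and API
# parameters may differ.
SOURCES = {
    "purgatory_accounts": "https://raw.githubusercontent.com/"
        "oceanprotocol/list-purgatory/main/list-accounts.json",
    "purgatory_assets": "https://raw.githubusercontent.com/"
        "oceanprotocol/list-purgatory/main/list-assets.json",
}

def fetch_json(url):
    """Download and parse one JSON document from a source."""
    with urllib.request.urlopen(url, timeout=30) as res:
        return json.load(res)

def crawl():
    """Collect every source into a single snapshot dict; a failure of
    any single source fails the whole nightly crawl."""
    return {name: fetch_json(url) for name, url in SOURCES.items()}
```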

We call the software that downloads all this metadata the "crawler". In the past, there have been nights where the crawler failed, leaving the main page without any reasonable results. When a crawl fails, an administrator can log in manually to fix the problem, e.g. by re-running the crawl.

Once the crawl has finished, we perform a number of calculations that flow into the final ranking.

The Ocean Protocol uses Automated Market Makers ("pools" for short) of Balancer's version 1. When staking e.g. QUICRA-0/OCEAN, the Balancer pool returns a separate ERC20 token that represents a staker's liquidity in the pool.

As we want to rate a data set's degree of decentralization, and hence the potential (or risk) of a single user "pulling the rug", we calculate a Gini coefficient from each pool's population of liquidity providers.

We define the Gini coefficient $G$ to be half the relative mean absolute difference over a population of liquidity providers in a Balancer pool. If $x_i$ is the absolute amount of pool tokens a liquidity provider $i$ owns in a pool with $n$ providers, we first find the mean absolute difference of all liquidity providers' stakes, $MD$. To then get $G$, we divide it by twice the arithmetic mean of all stakes, $AM$:

$\begin{aligned} MD &= \frac{1}{n^2} \displaystyle\sum_{i = 1}^n \sum_{j = 1}^n | x_i - x_j | & (1) \\[0.7em] AM &= \bar{x} = \frac{1}{n} \displaystyle\sum_{i = 1}^n x_i & (2) \\[0.7em] G = \frac{MD}{2 \cdot AM} &= \frac{\displaystyle\sum_{i = 1}^n \sum_{j = 1}^n | x_i - x_j |}{2n^2\bar{x}} & (3) \end{aligned}$
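Translated into plain Python, the definition above could be sketched like this (the production crawler may implement it differently):

```python
def gini(stakes):
    """Gini coefficient of a list of pool-token stakes, following
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x))."""
    n = len(stakes)
    if n == 0:
        raise ValueError("need at least one liquidity provider")
    mean = sum(stakes) / n
    if mean == 0:
        return 0.0
    # Mean absolute difference over all ordered pairs of stakes.
    md = sum(abs(xi - xj) for xi in stakes for xj in stakes)
    return md / (2 * n ** 2 * mean)
```

A perfectly even pool yields $G = 0$; the more a single provider dominates, the closer $G$ gets to 1.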

Since we've discovered an unfair advantage for pools with many small liquidity providers, for data sets with **more than** 100 liquidity providers we only consider pool shares of **more than** 0.1% in the above calculation. For pools with **fewer than** 100 LPs, a stake counts towards the Gini coefficient $G$ only when its share is **more than** 1% of the pool.
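The thresholding rule can be sketched as a pre-filter applied to the stake list before computing $G$ (note that the text leaves the case of exactly 100 LPs unspecified; the sketch below groups it with the smaller pools):

```python
def eligible_stakes(stakes):
    """Drop dust stakes before the Gini calculation: pools with more
    than 100 LPs only count shares above 0.1%; smaller pools only
    count shares above 1%.
    (Exactly 100 LPs is unspecified in the spec; treated as a small
    pool here.)"""
    total = sum(stakes)
    threshold = 0.001 if len(stakes) > 100 else 0.01
    return [x for x in stakes if x / total > threshold]
```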

As outlined in the section "Data Acquisition", a data set's price in OCEAN or BDPToken is downloaded from Aquarius. The latest tickers for OCEAN/EUR and BDPToken/EUR are downloaded from CoinGecko. The EUR value $v$ of an OCEAN data asset whose price is $p_{d}$ is then calculated by multiplying it by the price of OCEAN $p_{o}$:

$v = p_{d} \cdot p_{o}$
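As a minimal worked example with hypothetical numbers:

```python
def asset_value_eur(p_d, p_o):
    """EUR value of a data asset: v = p_d * p_o, where p_d is the
    asset's price in OCEAN and p_o the OCEAN/EUR ticker."""
    return p_d * p_o

# Hypothetical figures: an asset priced at 3 OCEAN, with OCEAN trading
# at 0.50 EUR, is worth 1.50 EUR.
```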

rug pull index fetches and calculates - as previously mentioned - a set of metadata daily. Given an exemplary market of only two data sets $A$ and $B$ over a timespan of two days, a fictional state of rug pull index's database could look like this:

name | liquidity (EUR) | Gini coefficient | date |
---|---|---|---|
A | 10 | 0.5 | 2021-06-22 |
B | 2 | 1 | 2021-06-22 |
A | 5 | 0.24 | 2021-06-23 |
B | 7 | 1 | 2021-06-23 |

To now calculate a daily ranking considering the data of both days (June 22 and June 23), rug pull index does the following: In the "liquidity (EUR)" column, the overall highest value ("max value") is selected by searching the database (it is "10" on June 22). Additionally, the smallest Gini coefficient ("min value") is determined ("0.24" on June 23). These "min" (Gini coefficient) and "max" (liquidity (EUR)) values are then used as benchmarks to evaluate the current day's data sets. Assuming the date to be June 23, 2021, we'd calculate the overall ranking of $A$ to be:

$A = \frac{5}{10} \cdot \frac{0.24}{0.24} \cdot 100 = 50\%$

and $B$:

$B = \frac{7}{10} \cdot \frac{0.24}{1} \cdot 100 = 16.8\%$

Hence, the main page's ranking would yield the following table for June 23, 2021:

rank | name | ranking | liquidity (EUR) | Gini coefficient |
---|---|---|---|---|
1 | A | 50% | 5 | 0.24 |
2 | B | 16.8% | 7 | 1 |

Generally speaking, on a day $d$ the rating $r_d$ of a data set can be calculated using the overall minimal Gini coefficient $g_{min}$, the overall maximal liquidity in EUR $l_{max}$, the data set's current liquidity in EUR $l_d$ and the data set's current Gini coefficient $g_d$:

$r_d = \frac{l_d}{l_{max}} \cdot \frac{g_{min}}{g_d} \cdot 100$
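Assuming the Gini factor is oriented so that a lower coefficient (a more decentralized pool) raises the score, the formula can be sketched and checked against the sample database like this:

```python
def rating(l_d, g_d, l_max, g_min):
    """Daily rating: liquidity relative to the best pool, times the
    minimal Gini coefficient relative to the pool's own, in percent.
    Higher liquidity and a lower Gini coefficient both raise it."""
    return l_d / l_max * (g_min / g_d) * 100

# Benchmarks from the sample database: l_max = 10 (A on June 22),
# g_min = 0.24 (A on June 23).
a = rating(5, 0.24, l_max=10, g_min=0.24)  # 50.0
b = rating(7, 1.0, l_max=10, g_min=0.24)   # ~16.8
```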

We pledge to update this specification with every update made to rug pull index's algorithm. This document was last updated on June 22, 2021.

If you have questions or feedback with regard to this document, please contact us.