For billing questions, see our Aptos Billing FAQ.
FAQ
How do I view my REST API requests, errors, and latency?
Visit the “REST API Insights” tab of the Aptos Node Service page in your Shinami dashboard.
- Filter by network, access key, method, and/or time range. Changing these filters re-fetches all the insights on the page.
- Summary metrics for a quick look at your error rates. Of course, a low error rate over a long period could hide one day that had a high error rate, so it’s worth occasionally looking at the graphs below.
- If you hover over a section of a chart bar, you’ll see details about what it represents.
- You can click items in the legend to hide or show them.
- There are more graphs below! Scroll down to view error and latency insights.
What each graph shows
- Request count by method: This shows the count of all requests, including all errors, broken down by method (assuming you’ve selected “All methods” in the filter at the top of the page).
- Error count by Aptos error code: This shows the count of requests that got an Aptos error code, grouped by code. If you filter the page by an individual method, e.g. `POST /transactions`, you’ll only see the errors you received on that method. This graph and the next pair well with our Error Guide’s sections on the Aptos REST API.
- Error count by HTTP error code: This shows the count of requests that got an HTTP error code, grouped by code. If you filter the page by an individual method, e.g. `POST /transactions`, you’ll only see the errors you received on that method.
- Error ratio by method: This shows the error ratio for each method. For example, if you sent 100 `POST /transactions` requests in a time bucket (e.g. a day) and you got an error on two of them, your error ratio would be 2% for that method in that time bucket (see the sketch after this list). If you see a high error ratio, it’s useful to scroll up to the “Request count by method” graph to see how many requests of that method you sent in a time bucket: a 100% error ratio could happen if you send one request and it gets an error.
- Requests latency by method - P50: This shows the 50th percentile latency for each method across all of its requests, including errors. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
- Requests latency by method - P95: This shows the 95th percentile latency for each method across all of its requests, including errors. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
- Successful requests latency: This shows the 50th and 95th percentile latencies for all of your successful requests. It does not group by method. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
- Requests with non-rate-limit errors latency: This shows the 50th and 95th percentile latencies for all of your requests that got an error (excluding rate-limit-errors since they have very low latency). It does not group by method. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
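To make the error-ratio arithmetic concrete, here is a minimal sketch; the counts are made up, and the method name is just the example used on this page.

```typescript
// Toy sketch of the "Error ratio by method" calculation for one time bucket.
// The counts below are hypothetical, not pulled from any real dashboard.
const method = "POST /transactions";
const requestsInBucket = 100; // all requests for this method in the bucket
const errorsInBucket = 2;     // requests that got an error
const errorRatioPct = (errorsInBucket / requestsInBucket) * 100; // 2%
console.log(`${method}: ${errorRatioPct}% error ratio`);
```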
How do I view my GraphQL API requests, errors, and latency?
Visit the “GraphQL Insights” tab of the Aptos Node Service page in your Shinami dashboard.
- Filter by network, access key, method, and/or time range. Changing these filters re-fetches all the insights on the page.
- Summary metrics for a quick look at your error rates. Of course, a low error rate over a long period could hide one day that had a high error rate, so it’s worth occasionally looking at the graphs below.
- If you hover over a section of a chart bar, you’ll see details about what it represents.
- You can click items in the legend to hide or show them. This works when there are two or more legend entries.
- There are more graphs below! Scroll down to view error and latency insights.
What each graph shows
Request metrics
- HTTP requests by status code: All of your requests, grouped by HTTP response code.
- GraphQL requests: The count of your GraphQL requests, grouped by whether or not there was at least one GraphQL error in the response. This number will not equal your HTTP request count if you sent any HTTP requests with multiple GraphQL requests, or if you had certain non-HTTP-200 response codes where we didn’t try to process the request body, like an HTTP 429 rate limit.
- GraphQL request error ratio: The percentage of your GraphQL requests that contained GraphQL errors. For more information on errors, see our Error Guide’s section on Aptos GraphQL API.
- HTTP request latency: The 50th percentile and 95th percentile latency of your HTTP requests. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
- Compute Units: The total compute units of the requests you sent in the time bucket. For more on compute units, see below.
Root-field query metrics
The graphs below are grouped by the root field of each GraphQL query, e.g. `ledger_infos` in `query MyQuery { ledger_infos { chain_id } }`. We preface queries with a `q:`, so the above query would show up as `q:ledger_infos` in these graph legends. If you send a root field that’s invalid, we map it to `q:_other`.
Toggle: Since a GraphQL request can contain multiple root-field queries, we provide a toggle to “Show metrics for HTTP requests where only one root field was present”. This is so that you can see the time it takes to process just that one specific query. Here is an example of a query with two root fields (`ledger_infos` and `processor_stats`).
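A minimal sketch of such a request body follows; the subfields selected are illustrative assumptions, not a reference for the Aptos GraphQL schema.

```typescript
// Hypothetical request body with two root fields: `ledger_infos` and
// `processor_stats`. With the toggle ON, this request is excluded from the
// root-field graphs (it has two root fields); with the toggle OFF, its
// latency counts toward both q:ledger_infos and q:processor_stats.
// The subfields selected here are illustrative only.
const body = JSON.stringify({
  query: `
    query MyQuery {
      ledger_infos { chain_id }
      processor_stats { last_updated }
    }
  `,
});
```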
- Occurrences: how often we ran a given root-field query for you (including responses with GraphQL errors).
- Error ratio: what percentage of the time did HTTP 200 requests containing this root field have GraphQL errors? For more information on errors, see our Error Guide’s section on Aptos GraphQL API.
- HTTP request latency containing root field (P50): How long did the 50th percentile of HTTP 200 requests containing this root field take (including responses with GraphQL errors)?
- HTTP request latency containing root field (P95): How long did the 95th percentile of HTTP 200 requests containing this root field take (including responses with GraphQL errors)?
- If you have the toggle on to show HTTP requests when only one root field was present, you’re measuring the latency of this exact query (because that’s the only operation the HTTP request did). If you have the toggle off, you’re measuring the latency of HTTP requests that contain this query (but could contain others as well, in which case it’s not a direct measure of this specific query’s latency).
- This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
What are QPS and CUPS?
QPS: “Queries Per Second”. We define this as the count of the unique HTTP requests you send per second. This is a count of all requests, including those with any kind of error (e.g. a rate-limit error). In your Shinami dashboard, you can…
- see if you hit any QPS rate limits. Look for rate-limit error code HTTP 429.
- get an approximation of recent QPS by looking at your Node Service API Insights in the dashboard and selecting a specific access key - or all keys on a network - and the time range “Last 60 minutes”. Then, take your highest bar and divide it by 60 (since each bar represents 1 minute).
CUPS: “Compute Units Per Second”. We define this as the total compute units of the requests you send per second (GraphQL requests are measured in compute units; see the “Compute Units” graph above). In your Shinami dashboard, you can…
- see if you hit any CUPS rate limits. Look for rate-limit error code HTTP 429.
- get an approximation of recent CUPS by selecting a specific access key - or all keys on a network - and the time range “Last 60 minutes”. Then, take your highest “Compute Units” bar and divide it by 60 (since each bar represents 1 minute).
How do I monitor my request and CU count?
REST API request count by day
Visit the “REST API Insights” tab of the Aptos Node Service page in your Shinami dashboard. Choose the network you want (e.g. “Aptos Mainnet”) + the time range of “Last 30 days UTC”. Then, look at the graph: “Request count by method”. This shows the total requests you’ve sent - including those with errors - broken down by the specific request name.
GraphQL API CU count by day
Visit the “GraphQL Insights” tab of the Aptos Node Service page in your Shinami dashboard. Choose the network you want (e.g. “Aptos Mainnet”) + the time range of “Last 30 days UTC”. Then, look at the “Compute Units” graph, which shows the total compute units of the requests you sent in each time bucket.

How do I monitor my rate-limits and QPS / CUPS?
REST API rate-limits and QPS
Visit the “REST API Insights” tab of the Aptos Node Service page in your Shinami dashboard. Choose the network and time range you want (e.g. “Aptos Mainnet” + “Last 30 days UTC”) and look at the summary metrics at the top: if you had hit any rate limits (HTTP 429), they’d show up here. (The HTTP 404s I see could be something like `account_not_found` errors.)

For QPS, select the “Last 60 minutes” time range and take your tallest bar. Mine is 85 requests in one minute, so 85/60 = 1.4 per second average QPS during my peak in the last hour. Since this is the average across a minute, individual seconds may have peaked a bit higher.
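As a small sketch of that arithmetic (the bar height is just the example number above; substitute your own reading):

```typescript
// Approximate peak QPS from the tallest 1-minute bar in the last hour.
// 85 is the example bar height from this FAQ, not a real measurement.
const peakRequestsInOneMinute = 85;
const approxPeakQps = peakRequestsInOneMinute / 60; // ≈ 1.4 QPS
// Individual seconds within that minute may have peaked higher.
```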

GraphQL API rate-limits and CUPS
Rate limits
Visit the “GraphQL Insights” tab of the Aptos Node Service page in your Shinami dashboard. Choose the network and time range you want (e.g. “Aptos Mainnet” + “Last 30 days UTC”) and look at the summary metrics at the top: any rate-limit errors (HTTP 429) would show up here.
CUPS
You can estimate your average CU per request by dividing your total compute units in a time bucket by your request count in that bucket. For me, I see 4074 requests the week of Feb 24, at a CU of 120,479. 120,479 / 4074 = 29.6 CU average per request.
This may not be the same for you! Not all requests have the same complexity, and the complexity of your requests may change over time. Also, we may make adjustments to our calculation of CU as we gather more data about the complexity of different queries. So, if you use this number as a guideline you should increase it to be safe - say, double it (so, assume your CU per request is 2x your average). Finally, we recommend checking on your CU per request from time to time to see if it’s changing.
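Here is a small sketch of that estimate, using the example numbers above (yours will differ):

```typescript
// Estimate average CU per request from one time bucket, then double it
// as a safety margin, per the guidance above. The numbers are the example
// figures from this FAQ; substitute your own dashboard readings.
const totalCu = 120_479;
const requestCount = 4_074;
const avgCuPerRequest = totalCu / requestCount;  // ≈ 29.6 CU
const budgetCuPerRequest = 2 * avgCuPerRequest;  // ≈ 59 CU, to be safe
```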
A simpler solution, though, is just to assign all the CUPS allotment you have to your active keys. Either way, you should monitor for, and retry with a backoff on, rate-limit errors.
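For the retry-with-backoff recommendation, here is a minimal, hedged sketch; the URL and request body are placeholders, not real Shinami endpoints.

```typescript
// Minimal retry-with-backoff sketch for rate-limit (HTTP 429) responses.
// Uses the built-in fetch (Node 18+). The endpoint is a placeholder.
async function postWithBackoff(
  url: string,
  body: string,
  maxRetries = 5
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body,
    });
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // Exponential backoff with jitter: roughly 0.5s, 1s, 2s, ...
    const delayMs = 500 * 2 ** attempt * (0.5 + Math.random());
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```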

