Node Service

Sui Node Service FAQ

Note: for billing questions see our Sui Billing FAQ.

FAQ

How do I view my API requests, errors, and latency?

Visit the Sui Node Service page to see your API Insights.

Here are some tips:

  1. Filter by network, access key, method, and/or time range. Changing these filters re-fetches all the insights on the page.
  2. Summary metrics for a quick look at your error rates. Of course, a low error rate over a long period could hide one day that had a high error rate, so it's worth occasionally looking at the graphs below. My test account has a lot of rate limit errors (I need to assign more QPS to one of my access keys, which I'd set to 1)!
  3. If you hover over a section of a chart bar, you'll see details about what it represents.
  4. You can click items in the legend to hide or show them.
  5. There are more graphs below! Scroll down to view error and latency insights.

What each graph shows

  1. Request count by method: This shows the count of all requests, including those with JSON-RPC errors, broken down by method name (assuming you've selected "All methods" in the filter at the top of the page).
  2. Error count by JSON RPC error code: This shows the count of requests that got an HTTP 200 and a JSON-RPC error, broken down by error code. If you filter the page by an individual method, e.g. sui_multiGetObjects, you'll only see the JSON-RPC errors you received on that method. This pairs well with our Error Guide's section on the Sui Node Service.
  3. Error ratio by method: This shows the JSON-RPC error ratio for each method. For example, if you sent 100 sui_multiGetObjects requests in a time bucket (e.g. a day) and you got a JSON-RPC error on two of them, your error ratio would be 2% for that method in that time bucket. If you see a high error ratio it's useful to scroll up to the "Request count by method" graph to see how many of that method you sent in a time bucket: a 100% error ratio could happen if you send one request and it gets an error.
  4. Requests latency by method - P50: This shows the 50th percentile latency for each method across all of its requests, including errors. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
  5. Requests latency by method - P95: This shows the 95th percentile latency for each method across all of its requests, including errors. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
  6. Successful requests latency: This shows the 50th and 95th percentile latencies for all of your successful requests (no HTTP or JSON-RPC error). It does not group by method. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
  7. Requests with non-rate-limit errors latency: This shows the 50th and 95th percentile latencies for all of your requests that got a JSON-RPC error (excluding rate-limit-errors since they have very low latency). It does not group by method. This is the latency within our system, so the latency you observe will be a little higher because of network travel time.
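To make the error-ratio arithmetic in item 3 concrete, here's a minimal sketch; the function name and the request/error counts are hypothetical, not part of our API:

```typescript
// Compute the JSON-RPC error ratio (as a percent) for one method in one
// time bucket, mirroring the "Error ratio by method" graph.
// Multiplying before dividing keeps the simple cases exact.
function errorRatio(totalRequests: number, errorCount: number): number {
  if (totalRequests === 0) return 0;
  return (errorCount * 100) / totalRequests;
}

// 100 sui_multiGetObjects requests with 2 JSON-RPC errors -> 2% for that bucket.
console.log(errorRatio(100, 2)); // 2
// A single request that errors -> 100%, which is why low-volume buckets can
// show alarming-looking ratios.
console.log(errorRatio(1, 1)); // 100
```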

What is QPS?

QPS stands for "queries per second." We define this as the count of the unique HTTP requests you send per second. This count includes all requests, even those that fail with an error (e.g. a rate-limit error).

In your Shinami dashboard, you can...

  • see if you hit any QPS rate limits. Look for rate-limit error codes (JSON-RPC -32010 for Sui).
  • get an approximation of recent QPS by looking at your Node Service API Insights in the dashboard and selecting a specific access key - or all keys on a network - and the time range "Last 60 minutes". Then, take your highest bar and divide it by 60 (since each bar represents 1 minute).
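The "highest bar divided by 60" approximation above can be sketched as follows; the function name and per-minute counts are hypothetical:

```typescript
// Approximate peak QPS from the "Last 60 minutes" view: each bar is one
// minute of requests, so (peak bar) / 60 gives the average QPS during the
// busiest minute.
function estimatePeakQps(perMinuteCounts: number[]): number {
  const peakBar = Math.max(...perMinuteCounts);
  return peakBar / 60;
}

// Hypothetical request counts read off four bars in the graph.
const counts = [120, 410, 380, 90];
console.log(estimatePeakQps(counts).toFixed(1)); // "6.8"
```

Remember this is an average over a minute; short bursts within that minute can exceed it, which is one way you can hit rate limits even when this estimate looks comfortable.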

How do I monitor my request count?

Daily request counts by network: On the Sui Node Service page of your dashboard, choose the network and time range you want (e.g. "Sui Mainnet" + "Last 30 days UTC"). Then, look at the graph: "Request count by method". This shows the total requests you've sent - including those with errors - broken down by method name.

How do I monitor my rate-limits and QPS?

Rate limits: On the Sui Node Service page of your dashboard, choose the network and time range you want (e.g. "Sui Mainnet" + "Last 30 days UTC") and look at the summary metrics at the top:

Here, I can see that in the last 30 days on Testnet, 0.21% of my requests had a rate limit error. That's not a huge number, and I retry requests when I get a rate limit, so this is okay.

However, there's one more place to check. Scroll down to look at the graph: "Error count by JSON RPC error code".

Our rate limit code is -32010 for requests to the JSON-RPC API and -32011 for attempting to make a websocket subscription when you've reached the limit for your access key. If you hover over a section of a bar, you'll see the code and the total count for the section, as shown in the image above.
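As a sketch of what checking for these codes might look like client-side (the type names and the error message string are hypothetical; note that a real rate-limit error arrives with HTTP 200):

```typescript
// Standard JSON-RPC 2.0 response envelope: a response carries either
// `result` or `error`, never both.
interface JsonRpcError {
  code: number;
  message: string;
}
interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: number;
  result?: unknown;
  error?: JsonRpcError;
}

// Shinami's Sui rate-limit codes: -32010 (JSON-RPC request limit) and
// -32011 (websocket subscription limit).
const RATE_LIMIT_CODES = new Set([-32010, -32011]);

function isRateLimited(resp: JsonRpcResponse): boolean {
  return resp.error !== undefined && RATE_LIMIT_CODES.has(resp.error.code);
}

// Hypothetical rate-limited response body.
const resp: JsonRpcResponse = {
  jsonrpc: "2.0",
  id: 1,
  error: { code: -32010, message: "rate limit exceeded" },
};
console.log(isRateLimited(resp)); // true
```

A check like this is a natural place to hang retry logic (e.g. retry with backoff on `true`, surface other errors to your error handling).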

Here, I see that most or all of my rate limiting has come recently. That's worth keeping an eye on, and potentially looking into further now. Some things to check:

  1. Has my overall request volume / user activity increased recently? If so and if it continues, I might expect more rate-limiting at my current settings.
  2. Did my integration or application recently change in a way that could be sending a lot more of a certain request type compared to before? You can also filter this page by individual methods and API keys - for example, to see if your frontend or your backend key is being rate limited.
  3. Have I assigned all the QPS I can to my keys, or is there more to assign? If I've assigned all I can and I'm using more than one key per network, can I shift some QPS between my keys (for example, between my frontend and backend key)?

Current QPS

The most granular QPS metric we provide is for the last hour. On the Sui Node Service page of your dashboard, choose the network you want to look at and the time range "Last 60 minutes" (note that you can also filter by API key as needed). Then, look at the "Request count by method" graph. Take the highest bar and divide by 60, because each bar is one minute. In the example below, my peak is just above 400. I'll estimate 410, and 410 / 60 ≈ 6.8, so my busiest minute averaged about 6.8 QPS.

Of course, this estimate is only useful to the extent that the current activity is reflective of average traffic or of a peak. Still, you can get a sense of whether your keys have a decent amount of headroom given your normal or current traffic. Looking at the "Last 48 hours" view first can help you tell whether this is normal traffic based on recent volume (but note that the rightmost bar is the current hour and will fill up as the hour progresses).