GameSparks Best Practices

Some options work better than others to evolve your game backend while maintaining performance as you scale up to meet customer demand. In this topic, you'll learn some best practices to minimize latency, use resources efficiently, and maintain scalability. You'll also learn how to analyze game performance.

Use Tools to Analyze and Optimize Performance

The following tools can help you analyze and optimize the performance of your cloud code.

For a deeper look at each tool and more, refer to Profiling Cloud Code Performance. Now, let's look at some individual service components and their best practices for building games at scale.

Use the Game Data Service

In general, you should use the Game Data Service to store custom data generated at runtime. This service is designed for fast and efficient operations at scale. We recommend that you use this service instead of MongoDB whenever possible. For information about the Game Data Service, see Game Data - Setting up Indexes.

Split Data into Smaller Documents & Data Types

When your Cloud Code updates a document, it must wait until the update is complete in memory before initiating another update. To avoid bottlenecks like this, factor your data into multiple documents instead of a monolithic document.

For example, a game includes an auction house where players can sell items to other players. An anti-pattern to avoid is storing all the auction data in a single type. A better design is to store only the data required to conduct the auction in the AuctionHouse data type. Store the details about items (weapons, armor, potions, and so on) in separate documents. When a player has purchased and inspects the item, such as a dagger, your Cloud Code looks up the details about this weapon by its itemName.
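The factoring described above can be sketched as follows. The document shapes here are illustrative plain objects, not real Game Data Service documents, and names like AuctionHouse and iron-dagger are invented for the example. The point is that the auction entry stores only a reference (itemName), and the item details live in their own document.

```javascript
//the auction entry (stored in the "AuctionHouse" data type) holds only
//what the auction needs, plus a reference to the item by name
var auctionEntry = {
    auctionId: "auction-1001",
    sellerId: "player-42",
    itemName: "iron-dagger",    //reference, not the item itself
    minimumBid: 150
};

//the full item details live in a separate document (a separate data type),
//keyed by itemName
var itemDetails = {
    itemName: "iron-dagger",
    category: "weapon",
    damage: 7,
    weight: 1.2,
    description: "A plain but reliable dagger."
};

//after a purchase, look up the details using the reference the auction carried
function inspectPurchasedItem(entry, itemsByName) {
    return itemsByName[entry.itemName];
}

var itemsByName = { "iron-dagger": itemDetails };
var inspected = inspectPurchasedItem(auctionEntry, itemsByName);
```

Because the auction entry and the item document are updated independently, bidding activity on the auction never blocks updates to item details, and vice versa.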

Handle Errors to Avoid Losing Data

When your Cloud Code accesses the database, you must handle errors thrown by the GameSparks APIs.

If your Cloud Code does not handle errors, you can lose data from the database.

Make sure that your Cloud Code handles errors from the following APIs.

When you catch an error:

  1. Use Spark.setScriptError() to record the error.

  2. Call Spark.exit() to stop script execution.

  3. If you need to retry your operation, catch the error on the client and re-send the API call from there.

Here are some basic examples of how to catch errors:

//catch getItem error, set script error, and exit script execution
var result = API.getItem(type, id);
if(result.error()){
    Spark.setScriptError("Error", result.error());
    Spark.exit();
}

//catch persistor error, set script error, and exit script execution
var document = API.createItem(type, id);
var persistResult = document.persistor().persist();
if(persistResult.error()){
    Spark.setScriptError("Error", persistResult.error());
    Spark.exit();
}

//catch queryItems error, set script error, and exit script execution
var results = API.queryItems(type, query);
if(results.error()){
    Spark.setScriptError("Error", results.error());
    Spark.exit();
}

Use Indexes with MongoDB Runtime Collections

Important: We recommend using Game Data Service instead of MongoDB Runtime Collections, which are deprecated. Game Data Service also provides high-scale performance. For more information, see Game Data Service.

If you can't yet switch to the Game Data Service, then use indexes in your MongoDB collections. If you use indexes, your queries can run up to 90% faster. (Without indexes, some queries waste time and resources because MongoDB must scan every document in the collection.)

The following are some best practices for using indexes with MongoDB runtime collections.

Add an Index to Your Runtime Collection

Adding an index to a runtime collection is a simple process. For example:

//reference the playerChatHistory runtime collection
var playerChatHistoryCollection = Spark.runtimeCollection('playerChatHistory');

//index the "dateOfChat" field
playerChatHistoryCollection.ensureIndex({"dateOfChat": 1}, {"background": true});

We recommend that you put your indexing code in the Game Published system script. This approach ensures that the system adds your indexes to your live runtime collections when you publish a new live snapshot.

Tip: It is not advised to drop or replace an existing index. Instead, create a new indexed field and store the data there.

For more information, see Creating Game Collection Indexes.

Verify Your Index

After you've added an index, verify that it works. Navigate to Data Explorer → Runtime Collection → Index → Get Indexes to verify that your index appears in the list.

Get Statistics to Check Your Query

To check that your queries are using indexed fields, use the Explain function in the Data Explorer.

  1. Open the runtime collection that you want to check.

  2. Build a query using dummy data and choose Explain.

  3. Navigate to executionStats → executionStages → stage → inputStage.

    Statistics on the completed query appear.

Check Your Query Statistics for Index Usage

In most cases, the statistics show one of the following results.

Check Your Query Statistics for Data Efficiency

You should check all queries for data efficiency before deploying them to a production environment. In the statistics that you collected previously, read the values of the following:

If the ratio of documents scanned to those returned is high, we recommend that you add an index to lower the ratio to improve efficiency and performance.
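As a small sketch of this check, the ratio can be computed directly from the explain statistics. The field names below follow MongoDB's explain output (totalDocsExamined, nReturned); the sample numbers are invented for illustration.

```javascript
//ratio of documents MongoDB scanned to documents the query returned;
//an efficient, indexed query keeps this close to 1
function scanRatio(stats) {
    return stats.totalDocsExamined / stats.nReturned;
}

var unindexed = { totalDocsExamined: 50000, nReturned: 25 }; //collection scan
var indexed   = { totalDocsExamined: 25,    nReturned: 25 }; //index covers the query

scanRatio(unindexed); //2000 - add an index to lower this
scanRatio(indexed);   //1 - every scanned document was returned
```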

Run Faster Queries Using a Time to Live Index

Time to Live (TTL) indexes keep your runtime collections smaller by releasing documents when they are no longer needed. As a result, your queries and other operations run faster. You can add TTL indexes only to date fields.

In the following example, we apply a TTL index to the date field and set a limit of 60 seconds until all documents that include this field are released.

//Insert a document in the TTL runtime collection
var ttlCollection = Spark.runtimeCollection("TTL");
var testData = ttlCollection.insert(
    {
        "testKey": "testValue",
        "date": new Date() //only date types can have a TTL
    }
);

//Index creation with TTL
//Place this in your Game Published system Cloud Code script
ttlCollection.ensureIndex({"date": 1}, {"expireAfterSeconds": 60});

Other MongoDB Best Practices

Use limit(), sort() and skip() in Cloud Code

You can use helper functions to improve the efficiency of find() operations.
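For example, combining sort(), skip(), and limit() lets you return one page of results instead of the whole result set. The Cloud Code form is shown in the comment; the runnable plain-JS model below mirrors the same sort/skip/limit pipeline on an array (the collection name and field names are illustrative).

```javascript
//In Cloud Code this would look like:
//  var page = Spark.runtimeCollection("scores")
//      .find({"level": 3})
//      .sort({"score": -1})              //highest scores first
//      .skip(pageIndex * pageSize)       //jump to the requested page
//      .limit(pageSize);                 //cap the documents returned
//The function below models that pipeline on a plain array.
function pageOf(docs, pageIndex, pageSize) {
    return docs
        .slice() //don't mutate the source
        .sort(function (a, b) { return b.score - a.score; }) //sort({"score": -1})
        .slice(pageIndex * pageSize,
               pageIndex * pageSize + pageSize); //skip + limit
}

var docs = [{score: 10}, {score: 40}, {score: 30}, {score: 20}];
var page = pageOf(docs, 0, 2); //first page of two: scores 40 and 30
```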

Consider Drawbacks and Benefits of Some Operators and Expressions

MongoDB API provides a number of operators for performing complex calculations on data sets. We recommend the following best practices.

Accessing GameSparks System Collections

Do Not Directly Query System Collections

We don't support direct queries to GameSparks system collections. We occasionally change these collections as needed to support GameSparks development. As a result, your queries against these collections might yield unexpected outcomes. Also, you cannot get the performance benefit of indexes from system collections. We recommend that you use the Game Data Service, which provides faster queries and more predictable results.

Use the Cloud Code API to Access System Collections

You can read data from GameSparks system collections by using supported Cloud Code APIs. Use the Spark object and the Spark requests to retrieve data values instead of writing your own custom logic to query the system directly.

For example, don't use this pattern:

//Direct query = bad!
var team = Spark.systemCollection("teams").findOne({"teamId": myTeamId});

Instead use this pattern:

//Cloud Code API = good!
var team = Spark.getTeams().getTeam(myTeamId);

See API Documentation.

Tip: If the data you want to query in a system collection is not accessible via a Cloud Code API, or the query field is not indexed, mirror the data to a Game Data Type instead. See the Mirroring the Player Collection tutorial.

Leaderboards

You can use leaderboards to drive social and competitive features in your game, and you can display them to suit the design of different types of games. To make sure your leaderboards scale, you must use and archive partitions effectively.

Keep Partitions to a Minimum

You can use partitions to create new leaderboards as needed, based on any number of attributes. We recommend that you create your partition based on no more than one or two attributes.

For example, you might partition a leaderboard by week, or by week and country.

It's possible to create a partition based on more than one or two attributes, but such overly specific partitions result in a large number of partitions with very few entries each. As a result, you might see lower performance from leaderboard APIs and potentially decreased player engagement.

See How to Partition Leaderboards.

Use Partitions Instead of Dropping Leaderboards

The Leaderboard.drop() API can be expensive if you have a lot of entries in the leaderboard. Use leaderboard partitions instead of this function for better performance.

Archive Old Leaderboards

To maintain high leaderboard performance, archive leaderboard partitions that you no longer need (for example, if no more scores will be added). By archiving your leaderboard partitions, you make them read-only, which frees up active resources.

For example, you might have a leaderboard partitioned by week. When a new week begins and the competition is reset, the previous week's rankings might no longer be relevant. In this case you can archive the partition.

Archived partitions can still be read with any leaderboard request or Cloud Code API as required. See archive.

See Archiving Leaderboard Partitions.

Messaging

You can use messages to engage with your players. However, if messages are not configured correctly, your game's message service could be degraded due to the following.

To avoid problems such as these, we recommend the following best practices.

Set an Expire After (hours) Value

For each of your message types you should set a value for Expire After (hours) (don't leave it blank).

This field specifies how long each message persists in the playerMessage system collection.

We recommend that you set the value to zero when possible; otherwise, specify a time limit that is no higher than necessary for the needs of your game.

If you specify no value, messages of this type persist indefinitely, and players who were offline when the message was sent always get the message when they return. This can cause the problems mentioned above.

If you specify a non-zero value, messages persist for as long as the value you specify, and offline players get the message only if they log in before the expiration time.

If you specify zero, messages do not persist at all, and offline players do not get the message. This value causes the fewest problems for your message system. Use it for messages that should be delivered only at a single moment, for example, in an online chat system.

Send Messages at Scale

Messaging is at the core of live game operations. For example, messages are vital for the beginning of a tournament, or to communicate a limited-time discount on in-game items. For events like these, you typically want to inform all your players about the event using a push notification.

To send messages broadly to your players, we recommend that you do the following.

Schedulers & Bulk Jobs

You can use schedulers and bulk jobs to effectively and conveniently automate recurring tasks and perform bulk operations.

Following these best practices spreads out CPU utilization and MongoDB operations, which improves performance.

Schedule Operations in Batches

GameSparks provides three system scripts you can use to schedule the execution of your Cloud Code scripts every minute, every hour, or every day (at 00:00 UTC).

You can also schedule your own scripts in Cloud Code using the SparkScheduler class.

For example, a game has a tournament that ends every Sunday night. The development team uses the Every Day script to check for Sunday and then run their Cloud Code scripts on that day. Rather than rewarding all the players in a single script, they use SparkScheduler to reward players in batches with a delayed offset time between each batch. This batched operation design pattern avoids overloading the database and keeps the process performing well.
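The batched pattern above can be sketched as follows. The stand-in Spark object and the short code "REWARD_BATCH_MODULE" are illustrative; in real Cloud Code, Spark is the built-in global, and Spark.getScheduler().inSeconds(shortCode, delaySeconds, data) schedules the module named by shortCode to run after the given delay.

```javascript
//minimal stand-in for the global Spark object, so the sketch is runnable;
//it records each scheduled batch instead of actually scheduling it
var scheduled = [];
var Spark = {
    getScheduler: function () {
        return {
            inSeconds: function (shortCode, delaySeconds, data) {
                scheduled.push({shortCode: shortCode, delaySeconds: delaySeconds, data: data});
            }
        };
    }
};

//reward players in batches, staggering each batch by offsetSeconds so the
//database never handles every reward at once
function scheduleRewardBatches(playerIds, batchSize, offsetSeconds) {
    for (var i = 0; i < playerIds.length; i += batchSize) {
        var batch = playerIds.slice(i, i + batchSize);
        var delay = (i / batchSize) * offsetSeconds; //0s, 30s, 60s, ...
        Spark.getScheduler().inSeconds("REWARD_BATCH_MODULE", delay, {"playerIds": batch});
    }
}

var ids = [];
for (var p = 0; p < 250; p++) { ids.push("player" + p); }
scheduleRewardBatches(ids, 100, 30); //3 batches: 100, 100, and 50 players
```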

Include an Indexed Field in Bulk Job Queries

Scheduled bulk jobs are an effective way to iterate over a group of players with a single action.

When you configure a bulk job, make sure that the query you pass into playerQuery contains at least one indexed field. Also, do not leave playerQuery empty, otherwise the system will scan the entire player collection.

For example, here's a player query that uses the lastSeen field, which is indexed:

{"lastSeen": { "$gte": {"$date": "2020-01-13T00:00:00.000Z"}},
"location.country":"US"}

For more about indexes, see Use Indexes with MongoDB Runtime Collections.

Note: Each game is limited to 20 scheduled bulk jobs. See System Limits for details.

Import Cloud Code Modules

Cloud Code Modules allow you to reuse code snippets and variables across multiple Cloud Code scripts. They are a great way to organize your code and avoid duplication. How and when you import modules can have an impact on script performance. For example, rather than importing a module unconditionally at the top of a script, you can import it only on the code path that needs it:

//import the module only when this code path actually runs
if(thing){
    requireOnce("shortcode");
}

Manage your Concurrent User and API Request Limits

By default, your game is limited in terms of the number of concurrent connected users (CCU) and the number of API requests per second. These limits protect the integrity of your live game and ensure an optimal experience for your players. For details on the limits applied to games in the Standard tier, see Usage Limits and Fair Usage Policy.

Get Data on Concurrent Users and API Requests

To get the data, examine your game's script.log collection in the Data Explorer. Run a Find on the script.log collection using the following queries:

//CCU limit reached on live service
Query: {"SERVICE_LIMIT_EXCEEDED": "RUNTIME_CCU_LIVE"}
Sort: {"_id":-1}

//API limit reached on live service
Query: {"SERVICE_LIMIT_EXCEEDED":
"RUNTIME_API_REQUESTS_LIVE_SECOND"}
Sort: {"_id":-1}

In the queries above, the Sort field orders the results by creation date, newest first (a MongoDB ObjectId embeds its creation timestamp, so sorting on _id descending sorts by date created).

Prepare to Request Increases in the Limits

As your game audience grows, you might want to request an increase in these limits. We recommend that you understand your game's API usage and performance before requesting a limit increase. We evaluate your game on these factors before we grant a limit increase.

Your game can call APIs in different ways. The requests from all these approaches are tallied and limited as a whole. Methods of calling APIs include the following.

Request Increases

If your game is on the Standard Tier, you can request a limit increase by creating a support ticket and providing the following details: