GameSparks Cloud-Code is a very broad topic. Importing Cloud-Code will not be a case of copy-pasting your existing code from one platform to another. In some cases these platforms even use a different programming language to GameSparks.

In this topic we will look at the differences between GameSparks Cloud-Code and what these alternative platforms have to offer, and how to adapt your existing code to fit how these platforms approach Cloud-Code.

We have split this topic into a list of the required features needed to replicate most GameSparks Cloud-Code use cases:

  1. Create custom scripts - This means the platform allows developers to create totally new APIs for their game, not just use the out-of-the-box APIs or modify existing ones. You will need to create something totally from scratch.
  2. Send and receive custom data to and from these scripts - GameSparks developers do use Cloud-Code to get specific player information or static data from MetaCollections, but more often they need the server to perform some actions or validations before returning a result. We therefore need to send and receive custom data and not just execute some code on the server.
  3. Get/Set Player Data - Script APIs should be able to load currently stored player data for validation, update this data and save it back to the player. Ideally, we should be able to do this for any playerId through the Script API, not just the player that called the script. We need this for things like Teams or Friends features.
  4. Get Static Game Data (MetaCollections) - We need a replacement for GameSparks MetaCollections, as these are widely used by developers to load static game data. This could include item tables or game-configuration data like live-ops events, localization, etc. We also want these alternative data stores to be cached so they are optimized for performance.
  5. Send HTTP Requests - A very common use-case for GameSparks Cloud-Code is to connect to another service like a 3rd-party payment, auth or other game service. The ability to send HTTP requests is very important for extending the current API offering of the platform.
  6. Receive HTTP Requests - Similar to sending requests, if we can receive data from 3rd-party services we can extend the platform ourselves. This is also very common for payment validation, advertisement campaigns and email validation.

Key Differences

As already mentioned, none of these alternative platforms have a way to directly port GameSparks code into their Cloud-Code system, so we have to work with what they offer. This section covers the main differences you need to be aware of for these platforms. We go into more details on the specific differences in each platform's own section.

Player Data

With GameSparks there are two main methods for storing player data we need to consider. Most developers will use a mix of these two methods, which might not be adaptable to the new platform, so this is something to consider.

Player ScriptData

This would be where you are storing your player data on the SparkPlayer object in either scriptData or privateData. This uses the player system-collection and is also cached for better performance. If you are storing your current data using the SparkPlayer API you will need something that can store custom JSON. Remember that you can also store this as a JSON string if the platform doesn't allow for JSON.
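As a sketch of that fallback, assuming a hypothetical string-only player store (the helper names here are illustrative, not any platform's real API), you could serialize and parse the data yourself:

```javascript
// Hypothetical helpers for a platform whose player storage only accepts
// strings: serialize on save, parse on load.
function savePlayerField(storage, key, value) {
  storage[key] = JSON.stringify(value);
}

function loadPlayerField(storage, key) {
  var raw = storage[key];
  return raw === undefined ? null : JSON.parse(raw);
}

// in-memory stand-in for the platform's string-only player store
var playerStore = {};
savePlayerField(playerStore, "progress", { level: 12, xp: 3400 });
var progress = loadPlayerField(playerStore, "progress"); // { level: 12, xp: 3400 }
```
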

Make sure to check what the limits of this storage are. There is usually a maximum size for either JSON or strings, depending on what the platform uses.

Database Storage

Not all of these platforms offer database APIs out-of-the-box like GameSparks does. This will make transitioning RuntimeCollections or Game-Data collections more difficult.

In these cases you may need to map your player data for those collections to JSON objects which can be stored using the platform's conventional player storage. This will mean revising how your existing GameSparks database queries fetch data. If they mostly query by playerId, this should be no problem. However, more complicated queries, like playerId and itemId together, could be solved by storing the data as an array of items or, to make it easier to access data by Id, as an object containing multiple objects, as in the example below.

GameSparks Document Example

The example below might be used in a “PlayerInventory” collection in GameSparks where you query using { “playerId” : “5c9208b4efe6a104f1c67e24”, “itemId” : 79 }

  {
    "_id": {
      "$oid": "5c9208b4efe6a104f1c67e57"
    },
    "playerID": "5c9208b4efe6a104f1c67e24",
    "itemId": 79,
    "unlocked": false,
    "unique": true,
    "dateAdded": {
      "$date": {
        "$numberLong": "1553074356610"
      }
    },
    "lastUpdated": {
      "$date": {
        "$numberLong": "1553074356610"
      }
    }
  }

To convert these docs to a single object you might use the following data-model.

"playerInventory" : {
  "8": {
    "itemId": 8,
    "unlocked": false,
    "unique": true
  },
  "79": {
    "itemId": 79,
    "unlocked": false,
    "unique": true
  }
}

This would allow you to replicate the query used in GameSparks to get the item by playerId and itemId by referencing the items by their Ids as playerInventory[itemId].
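As a sketch, a hypothetical helper like the one below could reshape a list of GameSparks-style inventory documents into that keyed object (the function name and sample data are illustrative only):

```javascript
// Hypothetical conversion: a list of inventory documents becomes one
// object keyed by itemId, so item lookups no longer need a database query.
function toKeyedInventory(docs) {
  var inventory = {};
  docs.forEach(function (doc) {
    inventory[doc.itemId] = {
      itemId: doc.itemId,
      unlocked: doc.unlocked,
      unique: doc.unique
    };
  });
  return inventory;
}

var docs = [
  { playerID: "player-one", itemId: 8, unlocked: false, unique: true },
  { playerID: "player-one", itemId: 79, unlocked: false, unique: true }
];
var playerInventory = toKeyedInventory(docs);
var item = playerInventory[79]; // replaces the { playerId, itemId } query
```
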


MetaCollections

As with player data, GameSparks MetaCollections make use of the database and return information using noSQL queries. This may not be an option for many alternative platforms, as they have their own way of working with static game-data.

These platforms often call their equivalent to MetaCollections something else, like title-data, content or game-data, but they work fundamentally the same way: there is some way to add content, usually through the portal or its equivalent (perhaps a REST API for complex data), and an API, client-side or in Cloud-Code, to return the data so we can use it. It is also important that this content is cached in some way in order to boost performance.
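If the platform's content API is not cached for you, a minimal client-side read-through cache is straightforward. This sketch assumes a hypothetical fetch function standing in for the platform's real content call:

```javascript
// Minimal read-through cache: only call the underlying content API on a miss.
var contentCache = {};
function getContentCached(key, fetchFn) {
  if (!(key in contentCache)) {
    contentCache[key] = fetchFn(key);
  }
  return contentCache[key];
}

// stand-in for the platform's content call, counting how often it is hit
var fetchCount = 0;
function fakeFetch(key) {
  fetchCount++;
  return { id: key, data: "static game data" };
}

var first = getContentCached("GameItems", fakeFetch);
var second = getContentCached("GameItems", fakeFetch); // served from cache
```
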

You will encounter the same issues as you might expect with player-data. If the platform does not allow you to store simple JSON, you will need to create some custom objects so that your data can be serialized by the client.

In cases where the alternative platform offers some kind of SQL database you might have to think about modelling the data to a stricter format than your noSQL collections. This shouldn't be a problem, as MetaCollections are generally static, but you will have to consider how you are going to index fields when converting the data to a SQL table.

Other Features to Consider

What we listed in the section above were only the features we need in order to get similar functionality in the new platform as you are familiar with in GameSparks Cloud-Code. However, as you already know, there are a lot of GameSparks APIs that we didn't cover that may be important to your game.

In some cases, these features will just not be available on the destination platform. Some examples of these cases are outlined below.


System Scripts

System scripts cover a few different use-cases in GameSparks. Most GameSparks developers are familiar with the every-minute, every-hour and every-day scripts, as they are commonly used. Other scripts like on-player connect/disconnect and on-publish are also used. However, we will not cover those here, as they are specific to GameSparks and will not have a transition route, though they may be reproduced as custom events or scripts.

For timed scripts (every-minute, etc) there are a few options. Some platforms come with these features themselves so it is only a case of porting the code (given the same complexities as porting any other Cloud-Code, it won't be as simple as copy-paste). In other cases we might have to use external services to run these scripts for us. We cover an example of this in the topic here.


Bulk-Jobs

Bulk-jobs are a tricky feature, even for GameSparks. With Bulk-jobs in GameSparks, the server will safely spread out the workload across all players involved and execute the job over time so as not to impact server performance. Bulk-jobs on other platforms should therefore not be considered simply as a way to execute code for every player in the game, or a large sub-set of players; you should also consider server performance.

For GameSparks it is not advised to use Bulk-Jobs for every-day events; they are more intended for admin tasks. So if your existing Cloud-Code relies on Bulk-Jobs for every-day operations, you should consider optimizing your code rather than trying to port over an inefficient feature.

Oftentimes, an alternative to Bulk-jobs is to trigger the update from a player-action. For example, if you need to inform players of an upcoming event, you can check for this upon login instead of changing something on every player’s account in order to flag their account for that event.

Something you can use in conjunction with the above suggestion is to return a list of active and inactive events when the player logs in. The inactive events come with a timestamp for when they become active, and the active events come with a timestamp for when they end. This allows the client to track when updates are needed and double-check with the server when events should take place. It reduces the need for bulk-jobs to modify player accounts or send out bulk-messages to players to inform them of changes on the server, which is also a common use-case for bulk-jobs.
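A sketch of that client-side check, using hypothetical event objects with start and end timestamps (the field names and data are illustrative):

```javascript
// Split a server-supplied event list into active and upcoming events,
// based on the current time, so the client can schedule its own updates.
function splitEvents(events, now) {
  var active = [];
  var upcoming = [];
  events.forEach(function (ev) {
    if (ev.startsAt <= now && now < ev.endsAt) {
      active.push(ev);
    } else if (ev.startsAt > now) {
      upcoming.push(ev);
    }
  });
  return { active: active, upcoming: upcoming };
}

var events = [
  { id: "spring_sale", startsAt: 100, endsAt: 200 },
  { id: "summer_cup", startsAt: 300, endsAt: 400 }
];
var result = splitEvents(events, 150); // "now" is 150 in this example
```
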


Scheduler

Most of these platforms will not have an alternative to the SparkScheduler API, but it may be possible to use something native to the programming language you are using, such as setInterval() in JavaScript, or timers and threading in C# and Python.

It is important to remember that if you create your own alternative to these features of the Spark API, you're not replacing everything GameSparks does under-the-hood so it is important to consider the performance impact of these custom features.

Transitioning MetaCollections

As mentioned above, in most cases, these platforms do not have a database you can move your meta-docs into. You will therefore need to work out how to model your data in this destination platform. If possible, consider automating this process through the use of the platform’s native REST APIs rather than copying the data over manually.

GameSparks API Wrappers

When transitioning your GameSparks code consider creating wrappers for certain common functionality between GameSparks and the destination platform.

There will certainly be cases where this will speed things up for you during the transition. Saving data to SparkPlayer and calls to MetaCollections could be replaced with a wrapper API around the destination platform’s alternatives for example. This would make copy/pasting code faster as you would not have to rewrite every call to the database or player object, you could just paste the details into your new wrapper API functions.
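For example, a minimal sketch of such a wrapper, assuming an imagined destination-platform API with its own load/save calls (all names here are hypothetical), might mimic the SparkPlayer surface:

```javascript
// Hypothetical shim reproducing the parts of the SparkPlayer surface a
// code base uses, on top of an imagined destination-platform API.
function makePlayerShim(platformApi, playerId) {
  return {
    getScriptData: function (key) {
      return platformApi.loadPlayerData(playerId, key);
    },
    setScriptData: function (key, value) {
      platformApi.savePlayerData(playerId, key, value);
    }
  };
}

// in-memory stand-in for the destination platform's player-data API
var backing = {};
var platformApi = {
  loadPlayerData: function (id, key) { return backing[id + "/" + key]; },
  savePlayerData: function (id, key, value) { backing[id + "/" + key] = value; }
};

var player = makePlayerShim(platformApi, "player-one");
player.setScriptData("coins", 100);
var coins = player.getScriptData("coins"); // 100
```

Ported code can then keep its `player.getScriptData(...)` calls almost unchanged, with only the shim knowing about the new platform.
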

Just remember, when approaching this work, that any inefficiencies in your existing code will also be transitioned. So while API wrappers will speed up the transition, they will hide issues that might cause you performance problems later on.

Asynchronous APIs

GameSparks did not use an asynchronous approach for its APIs.

When you execute a request to the database, it will hold up your script and wait for a response before continuing to execute the rest of your code. All of your GameSparks calls are synchronous. This is going to be an issue when it comes to adapting your code for the transition.

Some of these destination platforms do not use synchronous calls and instead use asynchronous calls.

If using asynchronous calls, when you send a request to your database or a 3rd party service (HTTP request for example), your code will not wait for the response before moving on to the next command.

Therefore, in a case like this in GameSparks…

var coinsBal = Spark.getPlayer().getBalance("COINS");
Spark.setScriptData("coins", coinsBal);

If ported as-is to an asynchronous platform, this would always return null or undefined to the client, because the script would not wait for the response from the database before setting the script-data.

There are several ways to overcome this problem which will be discussed in the sections below as they are specific to each platform’s programming language and APIs.

Asynchronous and synchronous calls each have their own uses. Regardless of the adaptations you make, don't convert all your database wrappers to synchronous when porting your code. In some cases, like logging and updates, you don't always need a response from the database before continuing with your script. Using asynchronous requests in these cases will speed up the script execution time so consider them whenever you don't need to get a response from the database or cloud-store.
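As a sketch in JavaScript, assuming a hypothetical async database client, that rule of thumb looks like this:

```javascript
// Await only where the result is needed; fire-and-forget for logging.
async function getCoinsForClient(db, playerId) {
  var balance = await db.getBalance(playerId, "COINS"); // must await: we return this
  db.log("balance checked for " + playerId);            // no await: result unused
  return balance;
}

// in-memory stand-in for an asynchronous database client
var db = {
  getBalance: function (playerId, currency) { return Promise.resolve(100); },
  log: function (msg) { return Promise.resolve(); }
};

getCoinsForClient(db, "player-one").then(function (coins) {
  console.log(coins); // 100
});
```
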

Performance Bottlenecks

It is extremely important to perform an internal review of your Cloud-Code before transitioning to a new service.

Currently inefficient database and player-data flows will also be transitioned to the new platform. Any bottlenecks you currently have in code are likely to be ported along with your code, even if you create a wrapper for your GameSparks code in the new platform.


Limitations

As with GameSparks, Cloud-Code on these platforms often has limitations you will need to be aware of ahead of time.

These limits can be something you are familiar with from GameSparks, like execution times, but there can also be limits on script size (lines or KBs), concurrent requests (how many requests can be in-process at one time), how many custom requests you can create per game, etc.

These are covered in each of the sections below per platform, so make sure to review any limitations before transitioning your game. Porting the Cloud-Code may be a major undertaking, but it would be worse if the work is all completed only to find the new platform cannot handle your Cloud-Code at load.


Beamable

Beamable’s alternative to Cloud-Code is called Microservices. Similar to GameSparks, this feature allows you to create scripts which run on the server and lets you send and receive custom data.

Microservices are very different from GameSparks. In this section we will discuss the key differences and outline any limitations or notable features you need to know before starting your transition.

Cloud-Code: Microservices

The first thing to know about Microservices is that they are not once-off scripts which are triggered by the client, run on the server and then are done until the next time they are called. These Microservices are intended to group together similar functionality into a distinct service.

For example, if you need some custom scripts for serving player profile data, you might combine all those scripts into a “ProfileService” and add to that as you need more custom functionality. You should not be creating a new Microservice for each GameSparks event or module.

Each of these Microservices is deployed as standalone, so you access them by referencing the Microservice itself, and then the name of the method you want to call in that Microservice. Beamable takes care of all the connection concerns and makes this very easy for you.

Microservices are written in C# and are created and edited through your Unity editor, which means there is no need for developers to switch between two different languages for frontend and backend development.

Key Concepts

There are some key aspects that make Beamable Microservices different from GameSparks Cloud-Code which we will briefly touch on before proceeding.

Containers & Docker

These Microservices use Docker containers. Containers might be a new concept for GameSparks developers but there isn't anything complicated you need to understand before using them. They are basically a package which contains all the software dependencies and config for your service/app so that it can be deployed anywhere with a single command, rather than needing to install and set up everything manually each time. This makes them lightweight and easy to scale.

Beamable takes care of all the setup and installation when it comes to these Microservices, so you don't even need to know they are there.

When you create a new Microservice script, you can sync it with your server instance, but you can also test and debug it locally. This means that there is no need to have a separate flow for testing through the portal with a test-harness like with GameSparks. Everything is done through the Unity editor and using your own IDE.

Asynchronous Calls

Beamable does not use synchronous calls for their Microservice API like GameSparks does. Therefore we will be using async functions instead. We will point out where this is important for each Microservice call we make, but for the most part you don't need to worry as it is usually a case of adding the “await” keyword to the call and making sure the method is async. Most IDEs will show you errors explaining that you need to handle these async calls which will make it easier to detect where they need to be used.


Libraries

As mentioned, each of these Microservices should be built to serve a specific purpose, like containing all your player-profile scripts or all your multiplayer scripts, so it is not advised to load multiple libraries and classes to handle a multitude of functionality as you can in GameSparks Cloud-Code. You should aim to keep these Microservices as lightweight and as dedicated as possible.

Having said that, you can do something similar to GameSparks by loading custom or 3rd party libraries into these Microservices.

We cover an example of how to import libraries into Microservices in the section below on Http Requests. Importing custom content classes for use in Microservices is covered in the section below on Referencing Custom Content in Microservices.


Limitations

Similarly to GameSparks, Beamable Microservices come with their own limitations you need to consider.

There is a limit to the response time of any service call. This is 10 seconds, similar to GameSparks. However, with GameSparks, exceeding this limit results in the script being terminated and returning an error. With a Beamable Microservice call, you will get an error returned, but the script will continue to execute. It is extremely important to remember this as it means you can cause performance issues on your service-instance by letting the code continue to run after the timeout.

Missing Features

In the following sections we are going to go through some examples of how different GameSparks Cloud-Code components are handled in Beamable, but we are going to leave out a few key features, so let's briefly talk about those here.


Bulk-Jobs

There is currently no way to run something like the GameSparks bulk-jobs using Beamable. However, they do expose their Microservice APIs over REST. It is therefore possible to use an external service to run these jobs over a large number of players, but it cannot be done with Beamable’s current tools.


Scheduler

Beamable currently does not have a built-in scheduler API, but this feature is on their roadmap.

System Scripts & Every-Minute

There are currently no scripts which fire at set intervals in Beamable. However, since you can hit Beamable’s APIs over REST or through a Microservice, you could utilize something like AWS CloudWatch and Lambdas to trigger specific Microservice scripts. We cover an example of this in another topic here.

Creating and Running Microservices

To create a new Microservice you will need to go to the Unity menu bar at the top of the application.

There you will need to go to Window -> Beamable -> Utilities -> Microservices -> Create New.

When you have given your Microservice a name, you need to open the Microservice Manager to use it.

You will see some options relating to it which are pretty self-explanatory.

Remember, these aren't scripts that take effect the moment you save them. They will compile when you make changes just like any other script in Unity, but in order to test those changes locally you need to stop them using the Stop button here, click Build, and then start them again.

If you want to push those changes to your backend you can click on the WRITE ACTUAL MANIFEST button at the top of the Microservices Manager. This will start the process of pushing the services to the backend. This can take quite a while and throughout the process you will see logs in the console. You can also see this progress if you open the portal and go to the Microservices tab.

Note - Beamable are currently planning to change the UI of the Microservice Manager so things might look different for you.

Here you can see all of your Microservices and their current states, along with any deployments you made.

Note - You don't need to push your Microservices to your backend in order to start testing, you can run and connect to them locally.

Executing Microservices

Executing a Microservice is pretty simple. Each Microservice you create has a “Client” class generated for it. This is what we use to execute the service calls.

For example, a service called NewMicroservice has a class called NewMicroserviceClient. We can simply instantiate that and call one of the functions you created. Usually, when you create a new Microservice, Beamable adds a method called ServerCall() by default, but you can rename that if you want to.

So your Microservice might have a function like…

public async Task<string> ServerCall()
{
  // This code executes on the server.
  return "hello world";
}

And your client can call this function like so…

NewMicroserviceClient _msClient = new NewMicroserviceClient();
string resp = await _msClient.ServerCall();

Note - Remember to build your service and restart it after any changes you make to the Microservice script.

MetaCollections: Content

Beamable’s alternative to GameSparks MetaCollections is called Content. However, Content actually covers a number of different Beamable features, not just storing static data.

You can see if you click on the Create dropdown in the Content Manager tab that many other features are listed as Content.

The key concept to understand in relation to transitioning your GameSparks features (not just MetaCollections) is that Content is cached, static definitions for your game’s features. This is more similar to SparkConfig than to MetaCollections, but because all Content is cached, and we can get Content with easy-to-use APIs available out of the box, Content is a good fit for replacing MetaCollections.

Example: Converting MetaCollection To Custom-Content

We will go through a quick example of how you might convert your existing GameSparks MetaCollections into a Content structure. For this example, we will use a common use-case where we need to get the base-description of an item. This item might be given to any player using its Id. Therefore, each item in the player’s inventory does not need to have all the details, we just need to know this player has x amount of itemId 10, and we can tell what itemId 10 is from the MetaCollection.

We can’t query Content, however; we can only get individual content-objects by Id or by tag.

So an item in the “GameItems” MetaCollection might look like this...

  {
    "_id": {
      "$oid": "591eda42c9e77c00012a6435"
    },
    "itemId": 1,
    "displayName": "Ember 1",
    "categoryId": "weapon",
    "starRating": 1,
    "powerModifier": 0.2,
    "iconId": "weapon_ember_portrait",
    "descriptionId": "ember",
    "modelId": "weapon_ember_full"
  }

There won't be an existing Content definition that covers this case, so we will have to create our own.

You can start by creating a new C# script in Unity. You can call it something like GameItemContent so that you know it comes from the GameItem MetaCollection structure. It would be helpful later on to group these scripts together into a common folder; in these examples we call that folder GSContent.

Before we define any variables in our class, we need to change this from a MonoBehaviour to a ContentObject type. This will let us create new instances of this type from the Content manager, and save our GameItems to the manifest.

using Beamable.Common.Content;

public class GameItemContent : ContentObject
{
}


If you go back to the Content Manager tab now, you will see your new Content type appear in the menu.

So now all we have to do is add the rest of our fields to the GameItemContent class.

public class GameItemContent : ContentObject
{
   public int itemId;
   public string displayName;
   public string categoryId;
   public int starRating;
   public float powerModifier;
   public string iconId;
   public string descriptionId;
   public string modelId;
}

Once those variables are added, you can go back to the Content Manager and add a new GameItem.

You can fill out the details for your item in the Inspector tab. Once you have your items filled out, click on the Publish button at the top of the Content Manager tab. You will see a popup indicating which content needs to be synced.

If you want to make sure your Content was uploaded you can go to the admin portal. You should see a new category for your new Game Items, with the items you just published.

Referencing Custom Content in Microservices

This is a little complicated but it is a requirement if you want to be able to access custom content from your Microservices script.

If you only want to do validation client-side then this is no problem, you can skip this. However, for more security you might want validation to happen in the Microservice in which case you will need to follow these steps.

To allow the Microservice script to reference your custom content go to your GSContent folder and add a new Assembly Definition. We can call it CustomGSContentDef. You can do this by right-clicking in the folder, going to Create and then Assembly Definition.

You are going to need to add some assembly references to this definition. Make sure to hit the “Apply” button before proceeding.

Note - It's common for this step to mess with your reference definitions for some of the files referencing the Beamable namespace. If this happens, first try to re-import the files from the folder. You can also restart Unity, which can help solve the problem. If that doesn't work, try removing the using statements in the scripts causing the problem and re-import them again.

Now we are going to add this assembly definition to our Microservice. You can find your Microservice folder at Beamable -> Microservices -> . You will find two files in there, click on the assembly reference file and add a reference for the CustomGSContentDef file we just created.

Remember to hit the “Apply” button to update the file.

Now we have covered Content as a replacement for MetaCollections and how to access custom Content in our Microservices. This process is used in a number of other topics if you need to see more examples. However, we will also repeat it in some examples below to help you understand how it can be used.

HTTP Requests

Beamable does not have an HTTP request API of their own like the GameSparks SparkHTTP API, so in order to send requests we can just use the built-in C# WebRequest class, or you can use a library. Remember that you want to keep these Microservices as lightweight as possible, so you should try to avoid importing libraries unnecessarily. Having said that, we are actually going to show how to import a popular JSON serializer so that we can parse the response of our web-request back into a format we can use.

In this example we are going to use a simple 3rd party API which will return a number of randomly generated names when hit. This is going to return a JSON string which we need to convert in order to use it in C#.

We will create a very simple class to model this data.

public class NamesList
{
   public string[] names;
}

If you are following from the previous example, you should add this script to the common GS folder so we can assign it to the Microservice as an assembly reference.

Beamable Responses

The structure above might seem confusing to GameSparks developers or even experienced Unity developers, so it is important to cover a few notes about how Beamable parses Microservice responses.

Microservice responses are parsed using the Unity JsonUtility API. This API doesn't support deserializing many common root object types you might want to use. You cannot use dictionaries or lists for example. To get around this, you should use strictly typed objects. Within these objects you can add more complex structures.

This is the reason we are using the NamesList class here and why we need to add the assembly reference for our custom definitions below.

Adding libraries to Microservices is pretty simple. In the folder where your Microservice script is found (Assets -> Beamable -> Microservices) you will find the manifest file for your Microservice. You will need to add the Newtonsoft library dll to the Assembly References section. Remember to hit the “apply” button to make sure it gets updated.

Note - We won't cover importing the package into Unity, but something to note is that in order for the Microservice to be able to build using the Newtonsoft dll you will have to allow that dll to be used on any platform. If you do not do this you will get an error when trying to build the Microservice indicating that the service cannot find the library.

Now you should be able to build your Microservice for the first time without errors.

The code we need here is pretty simple, we will list it step-by-step:

  1. Send the request to the API URL using the C# WebRequest API.
  2. Read the response as a stream and convert it to a JSON string.
  3. Convert the JSON string to a string[].
  4. Add this to our NamesList object.
  5. Return it to the client.

private string apiUrl = "";

public async Task<NamesList> GetNames()
{
  string respData;
  Debug.Log("Fetching Random Names...");
  // create a new web-request //
  HttpWebRequest request = (HttpWebRequest)WebRequest.Create(apiUrl);
  // next we read the response as a stream until we get a JSON string //
  using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
  using (Stream stream = response.GetResponseStream())
  using (StreamReader reader = new StreamReader(stream))
  {
     respData = reader.ReadToEnd();
  }
  // we will convert this JSON string to a string array so we can return it //
  NamesList namesList = new NamesList();
  namesList.names = JsonConvert.DeserializeObject<string[]>(respData);

  // example using the built-in JsonUtility which requires a root object //
  // NamesList namesList = JsonUtility.FromJson<NamesList>("{ \"names\": " + respData + "}");

  return namesList;
}

You will need to import the Newtonsoft.Json library and a few other C# libraries for the WebRequest and StreamReader classes, but once that is done, you should be able to build and run your Microservice.

You can also see an example of how to deserialize this data using the JsonUtility API Beamable uses internally. This would save you having to import the serializer, but it cannot handle complex data, so we thought we would show both examples. With the JsonUtility API you need to provide a root name for the JSON string, which is why a “names” field is added in the example before parsing.

Now you can call this function from the client using the following code.

HttpTestMSClient _httpTestClient = new HttpTestMSClient();
NamesList resp = await _httpTestClient.GetNames();
for (int i = 0; i < resp.names.Length; i++)
{
    Debug.Log(resp.names[i]);
}

You will be able to see logs from your Microservices and then the names printed out in the console.

Callbacks: Endpoints

Beamable also lets you hit any Microservice function using a HTTP request. This would allow you to integrate a 3rd-party callback, such as validating an email, updating payment receipts, etc.

We have already covered how to create a Microservice, so we will show a very simple example. We will create a Microservice with a simple function which returns a “Hello World” string.

public async Task<string> ServerCall()
{
  // This code executes on the server.
  return "hello world";
}

Up to this point, all the examples we showed would operate from the local Microservice or the backend, but for this case we need to make sure this Microservice has been synced with the backend.

To do this, click on the “WRITE ACTUAL MANIFEST” button in the Microservices Manager tab.

Here you will see options for deploying all your Microservices. We only need to deploy the “NewMicroservice” we are testing with; however, selecting only that Microservice causes the others to be disabled, so keep that in mind.

Deployment can take several minutes, during which you will see logs in the Unity console indicating progress. If you open the Microservices dashboard of your portal you should see some updates.

Once you see the service state as “Running” you should be good to go.

Callback URL

To hit this endpoint you need the URL, which is structured as follows:

{CID}.{PID}.micro_{ServiceRoutingName}/{MethodRoutingName}

You can get the cid and pid from your Realm details which can be found by clicking the button in the top left of the portal.

Here you can see some details about your Realm.

You can test this through Postman, or just by sticking the URL in your browser.
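To make the pattern concrete, here is a small sketch (in TypeScript, purely for illustration) that assembles the path from its parts. The CID/PID values shown are placeholders, not real credentials:

```typescript
// Illustrative helper assembling the Microservice callback path from the
// pattern above: {CID}.{PID}.micro_{ServiceRoutingName}/{MethodRoutingName}
function buildCallbackPath(cid: string, pid: string, service: string, method: string): string {
    return `${cid}.${pid}.micro_${service}/${method}`;
}

// Placeholder values; substitute the cid/pid from your own Realm details.
console.log(buildCallbackPath("1234567890", "DE_0000000000", "NewMicroservice", "ServerCall"));
```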

Posting Data

Posting data to an endpoint is straightforward: we use a POST request and give the request a body.

For this example I'm going to use the same Microservice but modified to take a string and return the same string.

public async Task<string> ServerCall(string input)
{
   // This code executes on the server.
   return input;
}

The body of your request has to follow a specific structure. It needs to contain an array called “payload” which holds the serialized arguments for the function; for this example, that is the single string we want echoed back.

Runtime Collections & GDS: Cloud Save

This feature is called Cloud-Save in Beamable and it allows you to store player related information.

It is important to note that this is not exactly the same as GDS or RuntimeCollections in GameSparks, as the Beamable Cloud Save feature cannot be queried. It is instead stored as JSON data and synced to the backend from Unity’s data store (Application.persistentDataPath).

It is therefore more suited to player config or progression data, rather than complicated documents which need to be queried separately. This does not stop you from transitioning your existing data to suit this storage model, but you should consider what data can be exposed and stored on the client and what data should be only exposed through Microservices as this data has to be uploaded/downloaded on the client.

Example: Player Settings

For this example we will take a common GameSparks use-case for RuntimeCollection or GDS. We have a collection called “PlayerAccounts” which has a document structure like this…

{
  "_id": {
    "$oid": "5cf68431ceedc604e6f67b75"
  },
  "playerId": "5cf6843059fe981049f2c8a7",
  "displayName": "mehface",
  "language": "English",
  "volumeMusic": 50,
  "volumeSFX": 50,
  "notificationsPush": false,
  "notificationInGame": true,
  "autoCompleteWarning": true,
  "playerEmail": {
    "email": "not-registered",
    "status": "unverified"
  }
}
We know that we don't need the doc Id (_id) and the playerId fields because the Beamable SDK will make sure we are returning only our own user’s data.

We also have two options here. We can maintain this data on the client, in which case we will send and receive updates directly from the client. This might be unsafe where we don't want to expose the player’s progress or inventory, for example. Alternatively, we can make this data accessible only server-side through a Microservice. Note that Microservices can only read Cloud-Save data; they cannot edit it. This might be important in your choice to use them.

We won't go into specific details in this example on how to set everything up in the client. There is an example from Beamable here which provides you with a similar flow for uploading and downloading the data. All we’ve done in this example is change the data to model our player settings example and create a constructor so that the player gets the default settings when their game is set up for the first time.

The first thing we need is the GSPlayerSettings class which will outline the JSON example from above.

public class GSPlayerSettings
{
   public string displayName;
   public string language;
   public int volumeMusic;
   public int volumeSFX;
   public bool notificationsPush;
   public bool notificationInGame;
   public bool autoCompleteWarning;
   public PlayerEmail playerEmail;
}

public class PlayerEmail
{
   public PlayerEmail()
   {
       email = "not-registered";
       status = "unverified";
   }
   public string email;
   public string status;
}

As mentioned before, it's a good idea to put these custom classes into a common folder so you can share these data models in Microservices using an assembly reference.

From there, we have just modified the example in the tutorial linked above, to download/upload our PlayerSettings example instead of their auto-settings example.

// Download/Upload the current data stored for this player //
await beamableAPI.CloudSavingService.Init();
// get the data from the manifest //
await beamableAPI.CloudSavingService.EnsureRemoteManifest();
// Load settings //
GSPlayerSettings playerSettings = ReloadOrCreatePlayerSettings();

The main area we modified was where the settings were defined for the first time.

playerSettings = new GSPlayerSettings
{
   displayName = "mehface",
   language = "English",
   volumeMusic = 50,
   volumeSFX = 50,
   notificationsPush = false,
   notificationInGame = true,
   autoCompleteWarning = true,
   playerEmail = new PlayerEmail()
};

If you run the example you should see the new Cloud Data file in the portal under Cloud Saving.


There is a size limit of 5MB per file on all Cloud-Save data. However, you can have as many files as you want on a player’s account.
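Since the limit applies to the serialized file, a quick client-side guard can catch oversized saves before syncing. A minimal sketch (TypeScript for illustration; the helper name and constant are ours, not part of the Beamable API):

```typescript
// Illustrative guard against the 5MB per-file Cloud-Save limit mentioned above.
const MAX_CLOUD_SAVE_BYTES = 5 * 1024 * 1024;

function fitsCloudSaveLimit(serializedFile: string): boolean {
    // Measure UTF-8 bytes rather than character count, since the stored file is bytes.
    return new TextEncoder().encode(serializedFile).length <= MAX_CLOUD_SAVE_BYTES;
}
```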

Custom Databases

Beamable does not currently offer a database service incorporated into their cluster but they can integrate your own databases on a case-by-case basis.

Stats Service

There is another way to store data for the player in Beamable and that is through the Stats API.

Stats are different from Cloud Save. They are intended for use as analytics stats or KPIs associated with player accounts. They can be very useful for saving information which could be used later for analytics. We will show an example below using stats to save account settings like the last-login and the player registration status.

There is also an example here using Stats to store the player's level attribute for a levelling system.


Stats are not intended to be used as a replacement for GameSparks JSON docs. While there are no limits on the number of stats you can apply to a player, Beamable strictly advises against using them to store player data as JSON strings.

Example 1: Daily Login Reward

The first example we are going to go through is intended to show how you can get and set some simple player data (SparkPlayer API), load some static server data (MetaCollections), deliver goods and currency to the player and return their rewards in the response.

There is a little set up needed for this example before we can start working with the Microservice so we will go through these steps first. We will not go into detail on how to set up Virtual Goods or Virtual Currencies because we have already covered those in other topics. Links to these topics are included below.

Virtual Currencies

In a previous topic here we covered how to create some Virtual Currencies. This is a simple process so we won't repeat it here, but we will show some code for crediting these currencies later on. We will use the COINS and GEMS currencies for this example.

Virtual Goods

In a previous topic here we covered how to create some GameSparks style Virtual Goods.

We will make another category for the daily rewards so we can return those too.

Custom Content: Daily Rewards

We are going to have to create some custom content so we can process the rewards. We have already shown an example of how this can be done in the section above on MetaCollections, but we’ll go through it one more time specifically for these rewards.

We are going to have a very simple JSON structure for these rewards…

{
  "rewards": [
    {
      "rewardType": "currency",
      "currType": "COINS",
      "amount": 250
    },
    {
      "rewardType": "currency",
      "currType": "GEMS",
      "amount": 6
    },
    {
      "rewardType": "vg",
      "code": "daimond_sword",
      "amount": 1
    }
  ]
}

This is the kind of structure that is very simple to work with in GameSparks because we can load it as JSON or store it in a document in a MetaCollection, we could even use properties.

However, with Beamable, we need to make a Content Object out of this so we can work with it as a C# object.

To begin, you can create a C# script. In our case we called this script GSRewardContent. We are going to change this class from a MonoBehaviour to a ContentObject class and give it a content type. We mentioned in the section above about Content and Metacollections that you sometimes need to create an assembly reference to access this content from a Microservice. Refer to that section as you will have to do that for this case so we can check what rewards to deliver in our Microservice server-side.

We are then going to add definitions for our rewards inside the class so that we can create new rewards and configure them.

public class GSRewardContent : ContentObject
{
   /// <summary>
   /// List of rewards to be delivered to the player
   /// </summary>
   public List<RewardDef> rewards;
}

public class GSReward
{
   /// <summary>
   /// List of rewards delivered to the player
   /// </summary>
   public List<RewardDef> rewards;
}

public class RewardDef
{
   // VG covers Virtual Goods; currency rewards use the currency code as the enum value //
   public enum RewardType { VG, COINS, GEMS }
   public RewardType rewardType;
   public string vgCode = "n/a";
   public int amount;
}

You will notice the GSReward class that we also created. This is for parsing the response back from the Microservice. We can't return something as complex as a ContentObject, so this is a simplified version that allows the response to be parsed.

If you save this script and head back into the Content Manager, you should see a new content-type appear.

You can create a new daily reward and add some rewards to it.

The next step will be to create a new Microservice which we will call after player login. Check out our topic here to see how to configure an authentication function. Remember that, where possible, try to group multiple service calls into Microservices instead of just creating one service for each call.

Since we need to add our custom content assembly definition to this microservice, you can go ahead and add that to the Assembly Definition References of your Microservice’s manifest file.

There are more details on how to do this in the section above on Referencing Custom Content in Microservices.

We need to return something meaningful from our post-login server function so that the client can detect what kind of rewards have been delivered. Remember, in previous sections we mentioned that Beamable needs strict object types in order to parse the response; this is why we created the GSReward class earlier.

There will be a number of steps to the code we need here:

  1. Load the stat from the player account and validate if it is null or not
  2. Convert the timestamp to a DateTime object
  3. Check if the last login was on a different date to the current date
  4. Deliver rewards, adding each reward to the GSReward object
  5. Update the stat to today’s timestamp
  6. Return the delivered rewards
public async Task<GSReward> PostLogin(long playerId)
{
  string lastLoginKey = "lastLogin";
  string access = "public";
  Debug.Log($"Checking for daily reward {playerId}...");
  // declare the rewards //
  GSReward rewardsDelivered = new GSReward();
  rewardsDelivered.rewards = new List<RewardDef>();
  // First, we check the player's last login. If this is their first login //
  // we will initialize that stat //
  string statString = await Services.Stats.GetProtectedPlayerStat(playerId, lastLoginKey);
  // We might not have the Stat implemented yet, this is a common case when working from JS //
  // So let's check that here. //
  long lastLogin = 0; // If the stat does not exist, then this will automatically deliver the reward for today //
  if (!string.IsNullOrEmpty(statString))
  {
     lastLogin = long.Parse(statString);
  }
  // we will convert this to a DateTime object so we can better use it later //
  DateTime lastLoginDate = UnixTimeStampToDateTime(lastLogin);
  Debug.Log($"Last Login: {string.Format("{0:yyyy-MM-ddTHH:mm:ss.FFFZ}", lastLoginDate)}");
  // Now we can check if the dates match or not //
  if (DateTime.Now.Date != lastLoginDate.Date)
  {
     try
     {
        GSRewardContent dailyRewards = (GSRewardContent) await Services.Content.GetContent("daily_reward.tutorial_example");
        // loop through the rewards and deliver them to the user //
        foreach (RewardDef rewardDef in dailyRewards.rewards)
        {
           if (rewardDef.rewardType == RewardDef.RewardType.VG)
           {
              for (int i = 0; i < rewardDef.amount; i++)
              {
                 Debug.Log($"Granting {rewardDef.rewardType}, {rewardDef.vgCode}");
                 await Services.Inventory.AddItem("items.virtual_goods." + rewardDef.vgCode);
              }
           }
           else
           {
              Debug.Log($"Granting {rewardDef.amount} {rewardDef.rewardType}");
              await Services.Inventory.AddCurrency("currency." + rewardDef.rewardType, rewardDef.amount);
           }
           // record the delivered reward so it can be returned to the client //
           rewardsDelivered.rewards.Add(rewardDef);
        }
     }
     catch (Exception e)
     {
        Debug.LogError(e.Message);
        return rewardsDelivered;
     }
  }
  // Save the current login date //
  long currTimestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
  await Services.Stats.SetProtectedPlayerStat(playerId, lastLoginKey, currTimestamp.ToString());
  return rewardsDelivered;
}

public static DateTime UnixTimeStampToDateTime(long unixTimeStamp)
{
  // Unix timestamp is seconds past epoch //
  DateTime dtDateTime = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
  dtDateTime = dtDateTime.AddSeconds(unixTimeStamp).ToLocalTime();
  return dtDateTime;
}

And then you can call this function using the following code…

PlayerServiceClient _playerServiceClient = new PlayerServiceClient();
// pass the current player's id (e.g. from beamableAPI.User.id) //
GSReward resp = await _playerServiceClient.PostLogin(playerId);
if (resp.rewards.Count == 0)
{
   Debug.Log("No Rewards Delivered...");
}
foreach (RewardDef reward in resp.rewards)
{
   if (reward.rewardType == RewardDef.RewardType.VG)
   {
       Debug.Log($"Virtual Good: {reward.vgCode}, {reward.amount}");
   }
   else
   {
       Debug.Log($"Currency: {reward.rewardType}, {reward.amount}");
   }
}

You should be able to see your rewards being delivered from the console.


Before we cover how AccelByte handles feature customization and Cloud-Code alternatives it is important to note that AccelByte is a product, and not a service like GameSparks. The key difference here is that when using AccelByte you are also engaging with the AccelByte team to ensure your game is set up and running correctly.

Rather than setting up an account and using the service through the SDK and portal like GameSparks or other alternatives, AccelByte will help you set up your own instances and create specific environments for your game and help you deploy it.

This does not mean that you cannot use their portal or their out-of-the-box features, it just means that you will need to engage with AccelByte before you can work with their tools.

If you wish to evaluate the product you can request a demo environment from AccelByte.

Throughout your engagement with AccelByte it is important to remember that you are working with the AccelByte team, not just a service provider. AccelByte can therefore assist your team with your transition from GameSparks in a number of ways.

Custom Microservices

If you need custom code, or you need your existing code moved, you can engage with the AccelByte team directly and have them create a custom microservice for you. This will be deployed in the environments set up for you, alongside the out-of-the-box microservices like Lobby, Leaderboards, etc.

AccelByte will work with your team to make sure the code is ported and optimized to meet their standards for scaling and performance.

Custom Game-Server

Another option for customizing AccelByte is to integrate their backend SDKs into your own service. These SDKs are available in JavaScript and Golang and offer an extended feature-set compared to AccelByte’s client SDKs. The idea behind these SDKs is that they allow you to build custom features to your own requirements without the help of AccelByte.

You can then host and deploy them yourself. There is a topic on creating your own backend here which will give you an idea of how this could be accomplished. You would then have to install AccelByte’s SDK on your custom server.


AccelByte offers early access to their cloud-scripting feature called Augment. We will show two examples of Augment in this section covering some of its capabilities which you might find useful for testing; however, it is not a full replacement for Cloud-Code.

Augment Setup

Augment uses serverless functions to run custom code. These functions are triggered by events within AccelByte or directly using REST calls.

By default, you can configure the function to be triggered when there is a change in:

We can manage these serverless functions in the Admin portal. In the portal choose the Functions option under Functions Configuration as shown below.

There are different programming languages available for these functions. Node.js would be the closest to GameSparks Cloud-Code, but you can also use Golang, Java, .NET, Python, Ruby, or PHP.

Augment Example: Crediting The Player

The first example we will show is linked to our topics on Virtual Currency here and Virtual Goods here. It was mentioned in those topics that the AccelByte Unity SDK protects against cheating by not granting the client access to credit/debit currency or grant items to players. Instead, this is done via another service. In this example we are going to use Augment to do this. However, it should be noted that Augment should really only be used this way for testing or in conjunction with short-term events. As mentioned above, Augment is not a Cloud-Code solution; it is designed to incorporate temporary features into your AccelByte instance.

Augment Script Setup

First, we need to create a new Augment function. You can do this through the Admin portal by going to the “Functions” category under the “Functions Configure” section as shown below.

You will then need to click on the “Create” option on the right side of the window. This will bring up a small window to configure the function.

For this example we only need the function-name and handler fields to begin with but you can see details of all other fields here.

The “Handler” field is like the shortCode field in GameSparks. This is the name you use when executing the function.

In this example, we want to expose our function via REST for testing, so choose the option “Expose function URL”. We do not need to set a trigger-type for this example.

We can supply client credentials in the environment-variables field. These could be any public or private client credentials AccelByte provided us with in an earlier stage.

These would be the same credentials the Unity client is using to access AccelByte services. If you do not know your credentials, you can find them by going to Platform Configurations in the top right-hand menu and then “OAuth Clients” as shown below.

We can enter the credentials in environment variables as shown below.

We are going to choose Python for this script example.

We are going to add some very simple code which will update a user’s wallet with the desired amount based on their playerId. You need to supply your namespace in the field below.

from justice import Justice

def credit_wallet(event, context):
    namespace = '<your-namespace>'
    endpoint = ""
    core = Justice(namespace, endpoint)
    user_data = event['data']
    # Note: the exact Justice SDK call was lost from the original snippet;
    # adjust the method name to match your SDK version.
    r = core.credit_wallet(user_data['userId'], user_data['amount'], 'COIN')
    return event['data']

We need to include dependencies under the Dependency Code. In this example, we would include something like...


Calling a Function Via REST

Once the function is created successfully, we can use this endpoint to update the user’s wallet. We can make use of the auto-generated FunctionURL, which will appear in the function’s Configuration Details panel after creation.

Below is the window after the function is successfully created in the portal. You can see the FunctionURL in the highlighted section.

To test this we can get one of our players and check whether they already have a wallet set up; before a player is credited or debited for the first time, they will have no wallet on their account.

Below is an example of one of our player’s accounts using AccelByte’s player-manager screen. We can see there is no wallet for this player at the moment.

Note: If the player does not have a wallet, hitting the above endpoint would actually create a wallet and credit the desired amount.

We can hit this endpoint with the cURL command below using any web API tool. In this example, we have used Postman. We need to include the Function URL and our player’s userId.

curl --location --request POST \
'<your-function-id>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "userId": "<your-userId>",
    "amount": 200
}'

We can import the above curl command in Postman and send the request. We can see the response below.

Checking on our user again, we can see the wallet of the user has been credited with 200 Coins.


This is not how you credit and debit players using AccelByte!

This is only for your own testing and evaluation purposes. As mentioned in the introduction to this section, you should use a custom microservice or a custom backend server for this sort of functionality.

Augment Example: Login Rewards

This example is going to be tied into the userAuthentication event we mentioned earlier so that it will run every time the player logs in.

We are going to configure the code so that it will check if the player has already logged in that day or not. If the player has not logged in already that day, we will give them a simple reward of a few GEMS.

Augment scripts are only supposed to be used as temporary custom-code, so we will also use this example to show an appropriate use-case for this.

Below is the code example to achieve daily login rewards. The “namespace” parameter would be your game’s namespace (as mentioned in the previous section), and we will create a collection called daily-login-test in the built-in MongoDB.

The result is that every time we hit this endpoint it will extract the lastlogin record from that collection.

import time

from datastore import MongoDB
from justice import Justice

def daily_login(event, context):
    namespace = '<your-namespace>'
    endpoint = ""
    core = Justice(namespace, endpoint)
    user_data = event['data']
    augment_mongoclient = MongoDB()
    collection_name = "daily-login-test"
    now = time.time()
    day_seconds = 60 * 60 * 24
    try:
        mongo_query = { "userId": user_data['userId'] }
        get_data = augment_mongoclient.builtin_db[collection_name].find_one(mongo_query)
    except Exception as e:
        return str(e)

    if get_data is None or (now - get_data['lastlogin'] >= day_seconds):
        # Note: the exact Justice SDK call was lost from the original snippet;
        # adjust the method name to match your SDK version.
        r = core.credit_wallet(user_data['userId'], 100, 'GEMS')
        try:
            # upsert the player's lastlogin record (write call reconstructed;
            # adjust to your MongoDB wrapper if it differs)
            data = { "userId": user_data['userId'], "lastlogin": now }
            augment_mongoclient.builtin_db[collection_name].replace_one(mongo_query, data, upsert=True)
        except Exception as e:
            return str(e)
        return "100 GEMS has been given to: " + user_data['userId']

    return "not a day has passed yet"
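The 24-hour gate in the script above can be expressed as a small standalone predicate (shown in TypeScript purely for illustration; the function name is ours):

```typescript
// Grant a reward only if there is no stored record, or if at least a full day
// (in seconds) has passed since the stored lastlogin timestamp.
const DAY_SECONDS = 60 * 60 * 24;

function isRewardDue(lastLogin: number | null, now: number): boolean {
    return lastLogin === null || now - lastLogin >= DAY_SECONDS;
}
```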

We can hit this endpoint with the auto-generated Function URL after creating the function in the Admin portal.

Initially, we will try to grant 100 Gems when the user logs in for the first time. Below is the response we received, which indicates that the user’s wallet was updated.

We can see the user’s wallet has been updated with 100 Gems in the admin portal.

As we discussed earlier in the example, the user should not be granted Gems after the first login, and we should receive an error message if we hit this endpoint within 24 hours. For demonstration purposes, let's hit the above endpoint again; below is the error response received.

We have shown the daily-login rewards example through a REST API; however, this is not much use to us other than for testing, as we want this to happen automatically upon authentication, not because we’ve hit it via REST.

As mentioned, we can configure this function to be triggered when the user authenticates, using AccelByte Kafka triggers.

To do this we need to select the kafka-trigger option for the function and then for the “Trigger Type” field, select userAuthentication for the Topic category.

This function will then be triggered every time the user logs in, and will grant Gems only for the first login of each day.


Cloud-Code with Nakama is a very large topic as Nakama offers many of the same features as GameSparks does, such as being able to create custom events, access the database and write custom code which can be applied to hooks like on-authentication or after IAP purchase validation.

However, Nakama’s approach to Cloud-Code is very different from GameSparks. Where GameSparks allows you to write your code from the GameSparks portal IDE, in Nakama you develop your code locally and push it to your server instance.

In this topic we will take a look at how to replicate your GameSparks code within the Nakama runtime server.

Server Setup

The first thing you need to do is set up a new server runtime.

Nakama offers 3 languages you can choose to develop your server code in: Golang, Lua and TypeScript.

For this example we are going to use TypeScript, as it may be more familiar to you and your team since your existing GameSparks code is in JavaScript. Using TypeScript will also mean that your code is more portable, saving you some time.

It is important to note that TypeScript is not the same as JavaScript so there will be some constraints to consider when porting your code. You may need to find workarounds or rewrite some of your project architecture.

Note - Nakama recommends Golang over the other options as the runtime environment is built on Golang. You therefore get more features and efficiency by running your server on the native environment. Consider porting your code to Golang to take advantage of this.

We won't go through every step for setting up this server as much of it is already covered on the Nakama documentation site. Make sure you have Docker installed and running before you start, then you can follow the guide available here for the TS server runtime.


Using the flow from the guide linked above you should have your server setup and running.

You can always double-check that your instance is running by opening the local address in your browser, which should show you your Developer Console.

If you have followed the steps in the setup tutorial linked above, your main.ts script should look something like below.

/**
 * This is our main function and entry point to the server when it starts
 * @param ctx           - server information & environments
 * @param logger        - used to add logging to the server logs
 * @param nk            - Nakama related functions
 * @param initializer   - used to assign RPCs and hooks
 */
let InitModule: nkruntime.InitModule = function(ctx: nkruntime.Context, logger: nkruntime.Logger, nk: nkruntime.Nakama, initializer: nkruntime.Initializer) {"Hello World!");
};

This main function is the entry point to your server so all your initialization code will go here.

To launch your server locally, follow these steps:

You should start seeing logs from Nakama in your terminal indicating that your server is being deployed.

The final log should show the message “Startup done” but somewhere in the middle of those logs you will see your “Hello World” message too.

You can think of the InitModule function like the GameSparks OnPublished script, which you may be familiar with; it runs only when a new snapshot is published to the live environment.

Therefore, any code you put in the InitModule function will run when your server boots, including anytime you redeploy the server instance.

Creating RPCs/Events

For the next example let us take a look at how we can create some custom events. In Nakama, these are called RPC functions.

We will start by creating a new TypeScript file in our project. All TypeScript files are compiled as modules, so any variables or functions we put into them are accessible everywhere in the project. For porting our GameSparks code this means that we don't need to use the require() statement to import modules.

For our example we called this script “exampleModule.ts”.
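For instance, a top-level function defined in exampleModule.ts is visible from main.ts with no import or require() statement, because every file listed for the compiler ends up in the one bundled index.js (the function below is just a hypothetical example, not part of the Nakama API):

```typescript
// exampleModule.ts: any top-level function declared here can be called from
// main.ts (or any other file in the project) directly, with no require().
function greetPlayer(name: string): string {
    return `Hello, ${name}!`;
}
```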

Before we can start creating our RPC function we need to list this file in the tsconfig.json file.

{
  "files": [
    "main.ts",
    "exampleModule.ts"
  ],
  "compilerOptions": {
    "typeRoots": [
      "./node_modules"
    ],
    "outFile": "./build/index.js",
    "target": "es5",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  }
}

Now we can create the RPC in the exampleModule script. We are going to do something very simple for this example; we will create a function that will take a userId and return some basic information about that user.

/**
 * Returns the player's basic details
 * @param context
 * @param logger
 * @param nk
 * @param payloadString
 */
function rpc_getPlayerDetails(context: nkruntime.Context, logger: nkruntime.Logger, nk: nkruntime.Nakama, payloadString: string): string {"Data In: " + payloadString);
    const payloadData: any = JSON.parse(payloadString);
    let userId = payloadData.userId;
    let playerData: any = {};
    // Get the player's account details //
    let playerAccount: nkruntime.Account = nk.accountGetId(userId);
    // construct the response data //
    playerData.userName = playerAccount.user.username;
    playerData.displayName = playerAccount.user.displayName;
    playerData.userId = playerAccount.user.userId;
    // stringify the data before returning it //
    return JSON.stringify(playerData);
}

If we break down what this function is doing, you can see that we are first converting the payload string to JSON. You will need to do this anytime you send data to an RPC function.

We are then loading the player’s account. This would be something similar to Spark.loadPlayer() in GameSparks. We take the fields we need from the account to create a custom object.

Finally, we return the data from the RPC function, which sends that payload back to the client. You will notice that we serialize the data back to a string before returning it. This is because all data returned from an RPC function needs to be a string.
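Because the payload always arrives as a raw string, it is worth guarding the parse step. A hedged sketch (our own helper, not part of the Nakama API):

```typescript
// Parse an RPC payload string defensively: an empty payload becomes an empty
// object, and malformed JSON produces a clear error instead of an unhandled one.
function parsePayload(payloadString: string): any {
    if (!payloadString) {
        return {};
    }
    try {
        return JSON.parse(payloadString);
    } catch (e) {
        throw new Error("Invalid JSON payload: " + payloadString);
    }
}
```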

Now we need to register this RPC request function in the main.ts script.

let InitModule: nkruntime.InitModule = function (ctx: nkruntime.Context, logger: nkruntime.Logger, nk: nkruntime.Nakama, initializer: nkruntime.Initializer) {
    initializer.registerRpc("GetPlayerDetails", rpc_getPlayerDetails);"hello world");
};

Reminder - Remember to rebuild and redeploy your server before continuing to test this example.

If you log into the Developer Console you should see your new RPC function appear in the Runtime Modules page.

Testing RPCs

Nakama does not have a Test-Harness like GameSparks does but you can still test RPC functions from the Developer Console’s API Explorer page.

Testing From Unity

Calling an RPC function on Nakama from Unity is very simple. We need to know the name of the RPC function, and we need to give it a custom JSON payload, built here using a C# Dictionary.

Below is an example from the Leaderboard topic here showing how you can post data to a custom Leaderboard RPC.

/// <summary>
/// Post a score to the given LB for the current player
/// </summary>
/// <param name="session"></param>
/// <param name="lbId"></param>
/// <param name="score"></param>
private async void PostScoreCustom(ISession session, string lbId, int score)
{
   string rpcId = "postScoreCustom";
   Dictionary<string, object> requestPayload = new Dictionary<string, object>()
   {
       { "lbId" , lbId },
       { "score" , score }
   };
   var payload = Nakama.TinyJson.JsonWriter.ToJson(requestPayload);
   IApiRpc responseData = await nakamaClient.RpcAsync(session, rpcId, payload);
}

RPC Context

In the example we just covered we were showing how you can send in a userId and get that user’s details back. As with GameSparks you could also get back the details of the user that called the RPC. You can get this information from the RPC context.

   let userId: string = payloadData.userId;
   let playerAccount: nkruntime.Account;
   // if there is no userId, then we can get the current user's Id //
   if (!userId) {
       logger.info("Fetching current user's data...");
       playerAccount = nk.accountGetId(context.userId);
   } else {
       logger.info(`Fetching data for user [${userId}]...`);
       playerAccount = nk.accountGetId(userId);
   }
For more information on context parameters check out this guide here.

Request & Response Scripts

Similar to GameSparks, Nakama has hooks which allow you to run code before or after common out-of-the-box server events.

An example frequently used by GameSparks developers is to run some code after registration which will give the player some starting currency or items. Check out our topic on Virtual Currency to see an example of how this can be done.

You can see more details on how to register these hooks here. There is also another type of hook you can use in Nakama which is called Messages. They work similarly to the before and after (request and response) hooks, but they are used for notifications such as chat or matchmaking.

You register these in the InitModule function as you would the other hooks and RPC, however, you need to know the name of the message if you want to register it. For example, if you want some custom code to run when you send a chat message you would use the “ChannelMessageSend” name.

initializer.registerRtBefore("ChannelMessageSend", onChannelMessageSend);

Working With TypeScript

Something to point out if you are not familiar with TypeScript is that you are required to maintain strict typing throughout your code. This might make porting GameSparks JavaScript code tricky as everything is treated like an object in JS.

The main thing to keep in mind is the response type of your functions and whether or not they can return null or undefined types.

let myString: string | null = returnString();
function returnString(): string | null {
   return "hello-world";
}

Any data that can be modeled should use an interface outlining that data. For example, in the GetPlayerDetails RPC example we could create an interface something like this.

interface PlayerData {
   userName: string,
   displayName: string,
   userId: string
}

We can then use this interface instead of an untyped JSON object.

let playerData: PlayerData = {
   userName:       playerAccount.user.username,
   displayName:    playerAccount.user.displayName,
   userId:         playerAccount.user.userId
};

However, there are cases where strict typing will not work for you. The flexibility of using JS objects in GameSparks means that you could have cases where only JSON objects will work.

An example of this is where you have a function which could return different types of data depending on what is needed. For these cases you can use the any type. This might get you unblocked in many cases while transitioning your code but you should aim to apply strict typing as much as possible.

function returnAny() : any {
   return {
       some: "messy",
       json: "stuff"
    };
}

Deploying

We’ve already talked about how to build and deploy your local Nakama instance. But you will eventually have to deploy somewhere else so that other developers and players can access the instance.

Because Nakama uses Docker you are free to choose where you deploy your instance; however, for this topic we will focus on the deployment option Nakama provides, called the Heroic Cloud.

Heroic Cloud

Heroic Cloud is a hosting platform that Heroic Labs provides for Nakama. Deploying on the Heroic Cloud means that they take care of all the infrastructure for you and you can focus on development.

You can see a link to the Heroic Cloud from the Developer Portal of your local instance.

Using the Heroic Cloud you can manage and monitor your server, change configuration settings and you can also hook up your server to a repository so that you can update and redeploy your live instance.

Builders

Builders in your Heroic Cloud account allow you to easily create new server-images and deploy them to your projects. Builders are easy to set up. There is a guide here on how to create a builder and link it to your repo.

Something important to note is that the folder layout for TypeScript is different than for Golang and Lua. Instead of using the root folder of your server runtime project, you will need the compiled JavaScript index file. This is usually found in the “build” folder of your project.

This is compiled when you build your project, but you can also trigger it to recompile by using the command “npx tsc” in the terminal.

There is one more step you will need in order for your configuration settings to be updated from your repo while using TypeScript. Your config.yml file needs to be renamed to the same name as your project.

For this example, the project was called “gstransitiontutorials” so the config file is called “gstransitiontutorials.yml”.

The repo folder therefore looks like this…

Once you have created a new builder image, you can deploy that image by going to your project, clicking on the Configuration tab and selecting an image to update from. This will trigger a redeployment so the service may be offline for a couple of minutes.

You can check that your server is ready by checking out the logs in the Logs tab.

Transitioning GameSparks Configuration

Once you have your server structure set up, the next thing to think about is how to get your GameSparks configuration into Nakama. In this section we are going to look at a quick and easy way to transition this data.

By GameSparks configuration here, we are talking about your Leaderboards, Virtual Goods, Achievements, etc. These could be transitioned by hand, one by one, or they could be synced via REST automatically. Something to note is that Nakama does not have all of these features out-of-the-box with exactly the same functionality as GameSparks so there will be some custom code required.

We are going to take a look at something in-between both of these approaches (manual and automatic).

You can find the configuration details for all your GameSparks features using the REST API here. For this example, we are going to look at transitioning Achievements.

From the Swagger page you need to fill in your authentication details at the top of the page and then select the Achievements configuration. We are going to use the “GET” method.

You will need to supply the API Key for your game and then hit “Try it out”.

The response should be an array of JSON data containing all your configured Achievements.

We are going to take this array and copy it into a new script in your Nakama runtime server.

const gsData_Achievements: any = [
    {
        "@id": "/~achievements/didTheCoolThing",
        "currencyAwards": {
            "gems": 5,
            "xp": 333
        },
        "description": "didTheCoolThing",
        "leaderboard": null,
        "name": "didTheCoolThing",
        "propertySet": null,
        "repeatable": false,
        "segmentData": [],
        "shortCode": "didTheCoolThing",
        "virtualGoodAward": {
            "@ref": "/~virtualGoods/diamond_sword"
        },
        "~triggers": []
    }
];

Next, you will need to store this config data somewhere so you can reference it from other scripts.

Our function for loading data is straightforward. It is going to use the Nakama Storage Engine to save each config document as an object. You will then be able to get those config objects in your scripts using the object “key”, which in this case will be the shortCode of the Achievement.

This process will allow you to write your own wrapper module which will be able to return the relevant configuration object based on its short-code, just as you are familiar with in GameSparks.
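As a sketch of what such a wrapper's request-building step might look like: the helper and interface names below are my own, while the "GSData_" collection prefix and the system user ID follow the convention used in this section.

```typescript
interface GSConfigReadRequest {
    collection: string;
    key: string;
    userId: string;
}

// Objects uploaded by the sync script are owned by the system user.
const SYSTEM_USER_ID = "00000000-0000-0000-0000-000000000000";

// Builds the storage-read request for a GS config object of the given type,
// keyed by its GameSparks short-code.
function gsConfigReadRequest(gsDataType: string, shortCode: string): GSConfigReadRequest {
    return {
        collection: "GSData_" + gsDataType,
        key: shortCode,
        userId: SYSTEM_USER_ID
    };
}
```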

/**
* Loads GameSparks config data into the Nakama storage engine
* @param nk {nkruntime.Nakama}
* @param logger {nkruntime.Logger}
* @param gsDataType {string}
* @param gsDataList {object} a JSON array containing GS config data
*/
function loadGSDataToNakamaStorage(logger: nkruntime.Logger, nk: nkruntime.Nakama, gsDataType: string, gsDataList: any) {
    logger.info(`Loading GS Config Data [${gsDataType}] to Nakama Storage...`);
   // We will go through the gsData and convert each object into a format that can be stored in the //
   // Nakama storage engine //
    var storageList = (gsDataObj: any) {
        logger.info(`Loading ${gsDataType}, Id: ${gsDataObj.shortCode}`);
        return {
            collection: "GSData_" + gsDataType,
            key: gsDataObj.shortCode,
            userId: "00000000-0000-0000-0000-000000000000",
            value: gsDataObj,
            permissionRead: 1,
            permissionWrite: 0
        };
    });
    nk.storageWrite(storageList);
    logger.info(`${gsDataType}'s uploaded to Nakama Storage...`);
}

As you can see, we are using a map function to convert the GameSparks config data array into a form that the Nakama storage engine requires.

We will create a collection based on the GameSparks config type. The key is the object short-code as already mentioned. The user ID is the default user. This is important because we must have a user ID associated with objects in the storage engine and we also want to control access to this object so that it is only accessible from runtime server scripts.

The permissions also help with this. permissionRead (1) allows only the owner to read the object. permissionWrite (0) allows no one to write/update the doc once it is created.
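For readability, those numeric permission values can be wrapped in enums. The enum and member names below are illustrative (my own), but the numeric values are the ones Nakama's storage engine uses.

```typescript
enum ReadPermission {
    NoRead = 0,     // nobody can read the object (runtime scripts still can)
    OwnerRead = 1,  // only the owning user can read it
    PublicRead = 2  // any user can read it
}

enum WritePermission {
    NoWrite = 0,   // the object cannot be updated by clients once created
    OwnerWrite = 1 // the owning user can update it
}
```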

The overall plan is to design something similar to how MetaCollections work on GameSparks. This would also be a suitable method for transitioning your MetaCollections provided you aren't using complex queries. Property Sets would also be suitable for this method.

Now we can test this code using your RPC function.

Reminder - Remember to declare the RPC in the InitModule function of the main.ts script and rebuild and redeploy the server.

function test_syncAchievements(context: nkruntime.Context, logger: nkruntime.Logger, nk: nkruntime.Nakama, payloadString: string) {
    loadGSDataToNakamaStorage(logger, nk, "Achievement", gsData_Achievements);
}

We can check that this object was uploaded by going to the Developer Console and clicking on the “Storage” option on the left-hand side menu.

You will be able to see your GameSparks configuration objects listed here. If you click on one of your objects you can inspect the data to ensure it was uploaded correctly.

Reading Storage Data

The next thing we need to cover is reading this storage data back into your scripts. This is also covered in the links above, but the code example shows how straightforward it is.

var achConfigList: any[] = [];
let achCursor: nkruntime.StorageObject[] = nk.storageRead([{
    collection: "GSData_Achievement",
    key: "didTheCoolThing",
    userId: "00000000-0000-0000-0000-000000000000"
}]);
if (achCursor.length > 0) {
    achCursor.forEach(gsDoc => {
        achConfigList.push(gsDoc.value);
    });
}

The method above will work for any kind of custom collections you have in your current GameSparks implementation, provided you are using simple queries such as { playerId, itemId } or just playerId to get these objects back.
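One way to emulate a composite { playerId, itemId } query is to fold both IDs into a single storage key. The helper and separator below are an assumed convention for illustration, not a Nakama API.

```typescript
// Composes a storage key from a player ID and an optional item ID, so that
// a { playerId, itemId } lookup becomes a single-key storage read.
function compositeKey(playerId: string, itemId?: string): string {
    return itemId ? `${playerId}_${itemId}` : playerId;
}
```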

If you require more complex queries then you can check out Nakama’s SQL API which allows you to make direct calls to the database.

However, we would encourage you to attempt to convert your calls to use the Storage Engine as this will keep your runtime server operating as efficiently as possible.

HTTP Requests

Let us take a look at how to use HTTP requests to reach out to other services using Nakama.

The example we will show here would be how we can check the contents of a chat message and stop it from being sent if any profanity is found.

We will use a third-party profanity-filter service for this, as it has a simple REST API. We can send our message string to the REST endpoint and it will return a version of our message with any profanities replaced with asterisks. This allows you to send back the censored version of the message, or compare it to the original message and decide whether or not to block the message.
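The block-or-allow decision itself reduces to a string comparison, sketched here as a standalone helper (the function name is my own):

```typescript
// The filter service echoes the message back with profanities replaced by
// asterisks, so any difference from the original means something was censored.
function messageWasCensored(original: string, filtered: string): boolean {
    return original !== filtered;
}
```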

/**
* This hook is raised whenever a message is sent to a channel
* @param context
* @param logger
* @param nk
* @param envelope - message payload //envelope.channelMessageSend.content//
* @returns {Envelope | void}
*/
function onChannelMessageSend(context: nkruntime.Context, logger: nkruntime.Logger, nk: nkruntime.Nakama, envelope: any): nkruntime.Envelope {
    var messageData = JSON.parse(envelope.channelMessageSend.content);
    logger.info(`Validating message [${messageData.message}]`);

    let method: nkruntime.RequestMethod = 'get';
    let baseURL: string = '';
    let getURL: string = baseURL + encodeURIComponent(messageData.message);
    let response: nkruntime.HttpResponse = nk.httpRequest(getURL, method);
    // Throw an error or pass the message on depending on the result //
    let isValid: boolean = (response.body === messageData.message);
    if (isValid) {
        // pass on the envelope to allow the message through //
        logger.info("Validation check passed...");
        return envelope;
    } else {
        // throw an error to stop the message from being sent //
        logger.info("Validation check failed...");
        throw JSON.stringify({
            "code": 123,
            "error": "validation-failed"
        });
    }
}
You can also set a body and headers for these HTTP requests. For more details on how to do that check out the documentation here.

Callbacks: Endpoints

As with GameSparks, you can also create scripts which can be hit via HTTP requests. These scripts are the same as the regular RPC functions we created in this topic. Let's take a look at a simple “echo” function that will return whatever we send to it.

function rpc_Echo(context: nkruntime.Context, logger: nkruntime.Logger, nk: nkruntime.Nakama, payloadString: string): string {
    logger.info("Data In: " + payloadString);
    return payloadString;
}

Remember we have to register this RPC in the InitModule function.

initializer.registerRpc("echo", rpc_Echo);

To hit this RPC via REST we need the http key. You can find this in the Configuration settings list in the Developer Console. For this example we are still using the default key but we will explain how to change these settings in the next section.

If you are testing your local instance the URL will look like this…

You will then need to pass a JSON string in for any data you wish to place in the body of the request.

We can use Postman to test this.

For more details on server-to-server calls from Nakama, check out the documentation here.

Configuration Settings

As with GameSparks, there are some configuration settings that you need to apply to the server in order to get features like social authentication and IAP working. Nakama, however, exposes quite a few more settings, so let us take a look at how to set these.

Note - There are more details and a full list of these settings available here.

Let's first take a look at the simple way we can apply these settings through the Heroic Labs portal.

First, select your project from the Heroic Cloud portal.

Next you need to click on the Configuration tab. Here you will see a section with a large list of configuration settings. These are the settings we will be updating to get our social authentication working.

Once you have changed one of these settings, make sure to save them using the button at the bottom of the list. Updating any of these settings will require your cluster to be redeployed before the new settings take effect. Remember that triggering a redeployment means you will temporarily lose access to your Developer Console.

Although this is simple, it does require you to be registered with the Heroic Cloud and have an instance deployed with them so let us take a look at how to do this with a custom deployment.

To do this we need to modify the runtime environment’s config.yaml file directly. We will be following the guide here on how to do this but remember, this example is only applicable to the local instance of the server. If you want to test this in a prod environment you will have to remember to set this up for your prod server too.

Using the bare-minimum configuration our new config file looks something like this.

name: nakama-node-1
data_dir: "./data/"

logger:
  stdout: false
  level: "warn"
  file: "/logfile.log"

console:
  port: 7351
  username: "user"
  password: "password1"

  app_id: '<app-id-here>'
  bundle_id: ''

Now you will need to restart your runtime server in order for your server to be updated from your config file. You can check that the change has been applied by going to your Developer Console and clicking on the Configuration tab.

Note on TypeScript Environments

We mentioned in the section on Deploying that there is a special flow for deploying builds to TypeScript on the Heroic Cloud. Check that section out again to see how you can update settings from a yml file if you are hosting on the Heroic Cloud. For self-hosting options the above method will work.

Missing Features

Although Cloud-Code with Nakama’s runtime environments is very flexible, there are some components that you will not be able to transition.

Bulk Jobs

Bulk Jobs are not supported by Nakama. However, the Nakama team will work with you to see if they can help you achieve what you need by some other means.

Schedulers

Schedulers are not an out-of-the-box feature of Nakama; however, it is possible to recreate this feature in Golang, but not with TypeScript.

Every-Minute, Every-Day, Every Hour Scripts

Nakama does not provide these scripts out-of-the-box for their runtime environments. It would be possible to recreate this functionality using the Golang environment as you could make these timed events run based on the current timestamp. This is not possible in TypeScript however.

Another alternative would be to use a Cron-Job using AWS CloudWatch. There is a tutorial on how to achieve this here.

Example : Daily Login Reward

For some of the other platforms we took a look at throughout these transition guides we would wrap up the Cloud-Code section with a simple example showing how we can piece all these components together to create a daily reward system. In this case Nakama already has a daily reward example on their site along with an accompanying video. Check that tutorial out here for more information.

brainCloud

brainCloud’s Cloud Code system is very similar to the model employed by GameSparks.

brainCloud Data

brainCloud offers a set of data APIs for storing player and game data. An overview of the different data APIs can be found here – as well as a discussion of the benefits of Custom Entities vs. the older User and Global Entity mechanisms here.


Developers should use brainCloud’s Unowned Custom Entities in place of MetaCollections. Custom Entities are JSON objects with an API for storage and retrieval. For more information, see the Custom Entity API reference.


brainCloud provides a mechanism called API Hooks that allows you to attach custom scripts to API Calls and other events in brainCloud.

API Hooks can be configured as either Pre- or Post- hooks – meaning that they will run either before or after the API calls they are enhancing.

Post-hooks are useful for cases where you want to enhance the results of an API call with additional data – for example, attaching the user’s campaign state to the Authenticate call results.

Pre-hooks can be used to add additional checks to allow/disallow API calls – and alternatively re-route the calls themselves. They are an easy way to re-route cheaters to alternate leaderboards, for example.


brainCloud offers a Batch User Script feature for scheduling a script to be run against all or a subset of users.

Like GameSparks, the workload is spread across all players and executed as background jobs to not impact server performance. brainCloud also offers the ability to trigger a completion script when all users have been processed.

For more information, check out the RunBatchUserScriptAndCompletionScript() API call.


brainCloud offers APIs for scheduling a script to run at a specified time in the future. Although not the same as the every-hour, every-day scripts supported by GameSparks – the same sort of functionality can be achieved by having the script re-schedule itself to run again. An example of such a script can be found in brainCloud’s Cloud Code Central repository.

Transitioning MetaCollections

The brainCloud Portal offers the ability to import data from JSON files – this is supported for both Custom Entities and Global Entities. This is suitable for importing static reference data – like level data, tuning files, etc.

Note - The allowed size of import files is limited – if you receive an error during the import, brainCloud support may be able to adjust the limit for you.

Note that brainCloud also supports an S2S API, which may be helpful if you need a more custom approach for transitioning your app’s reference data.

GameSparks API Wrappers

If you decide to create wrappers for GameSparks key APIs, you can easily include them in your scripts using the bridge.include() operation. Using this approach may allow you to better take advantage of the similarities between the brainCloud and GameSparks cloud code systems.

Performance Bottlenecks

The brainCloud Cloud Code Editor and API Usage pages can help you to find performance bottlenecks: