This guide outlines the options for moving player data from GameSparks to another service, to help your team decide which transition path best suits your game.
A direct transition involves replicating the entire database from the old service to the new service in one large step.
This is usually performed with the aid of an ETL (Extract-Transform-Load) tool, which can be run from your desktop or in the cloud. These tools are usually inexpensive and some are free; in any case it is a one-off cost, as the transition will not need to be performed again. They work by connecting to the source database (a GameSparks endpoint or database snapshot) and transferring documents to the destination database (the destination platform’s database endpoint).
The process can take anywhere from a few hours to a day to complete, depending on how many records need to be moved. Most of these tools also let you select only the records you want to extract, which can speed the process up considerably.
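To make the extract-transform-load cycle concrete, here is a minimal sketch in Python. The document fields ("playerId", "displayName", "currency") and the destination schema are assumptions for illustration; your real GameSparks documents will differ per game, and a real run would read from and write to database endpoints rather than in-memory lists.

```python
def transform(doc):
    """Map a (hypothetical) GameSparks player document to a
    (hypothetical) destination schema."""
    return {
        "external_id": doc["playerId"],
        "name": doc.get("displayName", ""),
        "soft_currency": doc.get("currency", {}).get("coins", 0),
    }

def run_etl(extract, load, batch_size=500):
    """Pull documents from `extract` (any iterable), transform them,
    and hand them to `load` in batches. Returns the number of
    documents moved."""
    batch, moved = [], 0
    for doc in extract:
        batch.append(transform(doc))
        if len(batch) >= batch_size:
            load(batch)
            moved += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        load(batch)
        moved += len(batch)
    return moved
```

Batching the load step, as above, is also what lets real ETL tools keep throughput reasonable when the destination endpoint charges per write request.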
This approach ensures that every document is transferred, but it takes time, and while documents are in transit it is recommended that the app be shut down so that new players cannot join and existing players cannot modify their records after extraction; any record changed post-extraction would arrive at the destination out of date.
Once the transition is complete, you would test a version of the app connected only to the new service, and release it once those tests prove stable.
Because the app has to be taken down for a period of time, direct transitions can be riskier.
There is often not as much time available for testing between the app being taken down, the ETL being performed, and the app going live on the new system.
The advantage is that from that point on you know all your players' data is safe. You can also create a backup of the database before going live, so that if anything goes wrong at relaunch you can take the app offline, restore the database from the backup, test again, and relaunch.
There are many different ETL tools available, but it is important to note that they are mostly built to move data between two databases of the same type. Tools that can convert a NoSQL database (which is what GameSparks uses) to a SQL database do exist, but they are rarer and usually more specialized and expensive.
Keep in mind that some of your data is likely in MongoDB (your GameSparks meta and player collections) and some of it may be in DynamoDB (your GDS collections). Remember that your destination platform might not support a NoSQL database, and in many cases a SQL database will be the better option for some of your collections. It is important to assess this with the destination platform before starting the transition.
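Moving a document collection into a SQL table usually means flattening nested structures into columns. A minimal sketch of that transform step, assuming nested keys are joined with underscores and lists are stored as JSON strings (both are conventions chosen here for illustration, not a GameSparks or destination-platform requirement):

```python
import json

def flatten(doc, prefix=""):
    """Flatten a nested document into column->value pairs suitable
    for a SQL row. Nested keys are joined with '_'; lists are
    serialized as JSON strings."""
    row = {}
    for key, value in doc.items():
        col = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, prefix=col + "_"))
        elif isinstance(value, list):
            row[col] = json.dumps(value)
        else:
            row[col] = value
    return row
```

In practice you would also need to agree column types and maximum key depth with the destination platform before committing to a schema.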
You can see a guide on some of these ETL tools here.
Passive transitions work by running the old service (GameSparks) and the new service in parallel for as long as is needed.
In this case the app will have two SDKs installed on it, one for each platform.
When a new player comes to the app, they will register only with the new service, and not GameSparks.
When an existing GameSparks player comes to the app, they first log in with GameSparks so their playerId can be confirmed. The player is then registered with the new service using their GameSparks username and password. During or after registration, the GameSparks playerId is sent from the new service to a custom endpoint on your GameSparks instance. In that endpoint script you can use the playerId to extract all the necessary data and send it back to the new service to be applied. The account is then marked as “transitioned” in GameSparks to ensure it cannot be transitioned again; you can also use this “transitioned” flag to block the player from authenticating with GameSparks from that point on.
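The endpoint logic described above can be sketched as follows. On GameSparks itself this would be a Cloud-Code script in JavaScript; Python is used here only for illustration, and the in-memory dict stands in for the player collection.

```python
class AlreadyTransitioned(Exception):
    """Raised when an account has already been exported once."""

def export_player(store, player_id):
    """Return the player's data for the new service and mark the
    account as transitioned so it cannot be exported (or used to
    authenticate) again."""
    player = store[player_id]
    if player.get("transitioned"):
        raise AlreadyTransitioned(player_id)
    # Everything except the bookkeeping flag goes to the new service.
    payload = {k: v for k, v in player.items() if k != "transitioned"}
    player["transitioned"] = True
    return payload
```

The important property is that the flag is set in the same step as the export, so a retry or a replayed request cannot move the same account twice.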
With this process your most active players will gradually be transitioned for as long as GameSparks remains available, after which you can remove the GameSparks SDK from the app and re-release it to depend only on the new service.
This process will only move your most active players. The number of players you can move depends on how long you can leave GameSparks running, so there is effectively a time limit and a cap on the total number of accounts that will be transitioned.
Although this is the less risky transition option, it means older accounts are less likely to be moved. In some cases this is no problem, but where users have paid for Virtual Goods, DLC, etc., it is a significant risk. The trade-off is ultimately a choice for the team as a whole.
Alternatives & Caveats
The approach above can work in either direction, depending on the capability of the destination platform. You can have the user log into GameSparks and use GameSparks to hit an endpoint on the destination platform. This requires the destination platform to have Cloud-Code capabilities similar to those of GameSparks.
Some platforms with Cloud-Code don't allow you to send custom data along with authentication or registration the way GameSparks does. In these cases you need a custom request to move the player, and there is nothing stopping a player from hacking the game and substituting any playerId for their own.
One way to secure this process is to have GameSparks return a unique token, such as a random string or a hash computed against a shared secret. Sending this token along with the playerId in your custom player-data sync request ensures that a player cannot be moved unless they authenticated with GameSparks before starting the transition.
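A minimal sketch of the hash-against-a-secret variant, using an HMAC. The secret value here is a placeholder; the assumption is that the same secret is provisioned out of band on both GameSparks and the new service.

```python
import hashlib
import hmac

# Assumption: this secret is shared between GameSparks and the new
# service, and never shipped inside the client.
SECRET = b"shared-between-gamesparks-and-new-service"

def make_token(player_id: str) -> str:
    """Token GameSparks returns to the client after a successful login."""
    return hmac.new(SECRET, player_id.encode(), hashlib.sha256).hexdigest()

def verify_token(player_id: str, token: str) -> bool:
    """Check, on the new service, that the playerId in the sync request
    really came from an authenticated GameSparks session."""
    return hmac.compare_digest(make_token(player_id), token)
```

Because the client never knows the secret, it cannot forge a valid token for someone else's playerId; `compare_digest` is used so the comparison is constant-time.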
Something that might interfere with this process, depending on your GameSparks implementation, is the 30-second execution limit on GameSparks Cloud-Code. In most cases this is plenty of time to transition a player, but wherever data must be aggregated or transformed before being sent back it could become an issue, especially if your game is already struggling to perform. Remember that you will be performing this extraction on the live environment, so any drop in performance affects players. That said, it shouldn't be too much of a concern, since you will be extracting players one by one rather than in a batch.
One way around this is to use an AWS Lambda function to extract the data over REST. This would be triggered from the destination platform and would continue to run until all the data is gathered, then return it to the destination platform either all at once at the end, or in batches.
Lambdas can be expensive to leave running for long periods, but since only one is needed per transitioning player, and the majority of players won't take longer than 30 seconds, it is a viable alternative.
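The core of such a Lambda is just a pagination loop. In this sketch, `fetch_page` is a hypothetical stand-in for the real REST call to GameSparks; it is assumed to return a page of records plus a cursor for the next page, with `None` signalling the last page.

```python
def gather_player_data(fetch_page, player_id):
    """Page through a single player's data until the source reports no
    further pages, then return everything gathered. `fetch_page` is
    assumed to return (records, next_cursor)."""
    records, cursor = [], None
    while True:
        page, cursor = fetch_page(player_id, cursor)
        records.extend(page)
        if cursor is None:  # source says there is nothing left
            return records
```

Returning in batches instead would simply mean yielding each page to the destination platform as it arrives rather than accumulating the full list.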
What if the destination platform has no Cloud-Code alternative?
In this case you have limited options. One way of doing this would be to call a Lambda function which will transfer the data server-to-server for your player. This would likely mean re-authenticating once the Lambda is finished but this wouldn't be very difficult.
Another way would be to do something similar to what is suggested in the previous section, but hit the GameSparks endpoint directly from the client, get the data, serialize it, and send it on to the new platform.
This could be an acceptable middle ground if hacking is not a concern for your game, but be aware that it is a very insecure approach.
Best of Both?
There are cases where the developers cannot risk shutting down the service, but also need to be sure that all their player data is moved, usually because of goods purchased as described above.
In these cases it is possible to start with a passive transition: you release the new version of the app running both SDKs in parallel, and then, close to the shut-off date (perhaps a week before), you start the ETL. This is the big ETL which takes the most time, and it should be sorted to process the oldest records first.
The data can be extracted to a backup database or straight into the main destination database; either way, once the transition is done all your records are safe in case the switchover fails.
Just before switching the GameSparks component off, you run a much shorter ETL covering only the players who have logged in since the last run. This is straightforward because GameSparks gives you access to the player collection over REST, and the “lastSeen” field in those documents is updated each time the player logs in.
Since this is a much smaller set of records than the main database, it should take a much shorter period of time than the bulk ETL job.
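The selection step for that follow-up run can be sketched as a simple filter on “lastSeen”. This sketch assumes the timestamps arrive as ISO-8601 strings; depending on how you pull the collection over REST they may instead be epoch milliseconds, in which case the parsing line changes accordingly.

```python
from datetime import datetime

def players_since(players, last_etl_at):
    """Select players whose lastSeen postdates the previous ETL run.
    Assumes lastSeen (and the cutoff) are ISO-8601 strings."""
    cutoff = datetime.fromisoformat(last_etl_at)
    return [p for p in players
            if datetime.fromisoformat(p["lastSeen"]) > cutoff]
```

Recording the start time of each ETL run and feeding it in as the next cutoff is what keeps the incremental passes small.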
This approach is much more complicated but it has the advantage of little-to-no downtime. Another advantage is that once all players have been transitioned to the backup database it can become a permanent reference for all player data. From there you can also export it for BI purposes in future, or into cloud-storage to keep costs down.
With this backup, if a player returns after the SDK has been removed or the GameSparks service has been turned off and still wants to recover their account, you can build a feature that puts them into an “account recovery” state and uses the backup database to move them. This recovery flow will be much slower than normal authentication or registration, but it only applies to those occasional returning players, and it runs only once per player.
Custom ETL Tool
In a case where you cannot access your database from an endpoint, or the data is only available through some other mechanism (say, an S3 bucket), a custom ETL is going to be needed.
An ETL is really just an interface between a source database and a destination database. Most off-the-shelf tools are built against the databases' own APIs for convenience; here you will have to write that interface yourself. In some cases, such as data sitting in an S3 bucket, the service already exposes REST APIs, so you are not creating APIs from scratch, just writing your own requests against them. The important thing is that the tool makes efficient use of the service's APIs and keeps some kind of cache to track what it has already moved.
There are plenty of ways to approach this, but one we might suggest is writing your ETL as a Lambda function triggered by a CloudWatch event. This lets you trigger one or several Lambdas at regular intervals until the job is done. The benefit is that you can scale out to extract faster, and there is no need to run a tool on your own machine.
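One scheduled run of such a custom ETL might look like the sketch below. The `moved` set stands in for the persistent cache mentioned above (in a real deployment this would be something durable like a DynamoDB table, since Lambda invocations do not share memory), which is what makes repeated CloudWatch-triggered runs idempotent.

```python
def etl_run(source_docs, load, moved, limit=1000):
    """Transfer up to `limit` not-yet-moved documents in one scheduled
    run. `moved` records the ids already transferred so re-runs skip
    them. Returns how many documents this run moved."""
    count = 0
    for doc in source_docs:
        if doc["id"] in moved:
            continue  # already transferred by an earlier run
        load(doc)
        moved.add(doc["id"])
        count += 1
        if count >= limit:
            break  # stop well inside the Lambda time budget
    return count
```

When a run returns 0, the scheduled rule can be disabled: the job is done.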
In general, one advantage of building a custom ETL is that you can adapt to various source and destination points.
If you want to transition to a platform that does not expose a database endpoint for loading, then you can use that platform’s REST API to create accounts and load data. This will take longer of course, but you have that flexibility.
Unlike a direct transition, you actually want this process to run slowly, because you will be reading and extracting from the live database while the game is up. Excessive load on the database will slow down the live service and may be flagged internally by GameSparks, in which case you would be asked to stop the process, so it is safest to throttle it and proceed slowly.
Beamable has a REST API which could be used for some simple account setup, but the better approach would be to create a transition Microservice. Microservice functions can be called from external services, so these could be used to set up the correct structure in the new player account given the GameSparks player's data.
Beamable will also facilitate direct transitions with customers on a case-by-case basis depending on the customer’s needs.
This SDK could be used to create your own transition microservice, reaching out to their backend and configuring player information specifically for your game’s needs.
AccelByte will also facilitate direct transitions with customers on a case-by-case basis depending on the customer’s needs so it is worth contacting them directly to see how they can help.
The brainCloud Portal offers the ability to import data from JSON files – this is supported for both Custom Entities and Global Entities.
This is suitable when importing static reference data – like level data, tuning files, etc.
Note - The allowed size of import files is limited; if you receive an error during the import, brainCloud support may be able to adjust the limit for you.
Note that brainCloud also supports an S2S API, which may be helpful if you need a more custom approach for transitioning your app’s reference data.
For dynamic user data, it is highly recommended that new users be transitioned over during initial login to brainCloud. This has several benefits:
- Simpler – Importing a single user is simpler than importing all of them
- Scalable – It spreads the work of importing out, creating less load on both brainCloud and GameSparks
- Efficient – It ensures that only active player data is transitioned over to brainCloud. If your game is more than two years old, it's likely that fewer than 25% of your stored player accounts are still active.
The recommended approach is to leverage the following brainCloud features:
- External Authentication – which allows brainCloud users to be authenticated via an external source – like your GameSparks app
- API Post-Hook – which can be used after successful authentication, to trigger a script to retrieve the user’s GameSparks data
- HTTPClient service – used to make HTTP calls to external services (i.e. GameSparks)
More information on this sort of approach can be found in this article.