Introduction

This topic is intended to give GameSparks developers a plan for moving their current downloadables to a custom-built service on AWS. It assumes the developer has no prior knowledge of AWS services, so we will go into as much detail as possible to clear up any confusion. We will provide links to additional resources or AWS tutorials where possible to allow you to fill in any knowledge gaps for yourself.

GameSparks Downloadables use AWS S3. We don't need to go into much detail about what S3 is: it is a simple service, fully managed by AWS. In essence, S3 is a folder where you can upload any files you want. In S3, these folders are called “buckets”.

When you upload a file through the GameSparks portal, what is actually taking place is that GameSparks is authenticating with an S3 bucket created for your game and uploading your files to that bucket.

When you call the GetDownloadableRequest, GameSparks is also reaching out to S3 to create a signed, short-lived URL where you can access the file. This is what gets returned from that request.

However, we don't have to go through this flow if we use the AWS SDK. Instead, we can go directly to S3 to get the data so that is what we are going to cover in this tutorial. We will need an additional AWS service called Cognito in order to do this. Cognito lets us authenticate with AWS so we have the right permissions to access our files. We will also take a look at another AWS service called IAM (identity and access management). We will use an IAM policy to restrict access to the S3 bucket to only reading single files from the client. This means that the client cannot search files or upload them.

We also don't have to keep to any GameSparks file limitations with a custom S3 bucket, though S3 does have a 5TB limit per file, which should be enough for most developers.

Before starting anything you will need to set up a new AWS account. We won't cover this here, but it is a simple process. Just go to aws.amazon.com to sign up.

Note - The services we are going to be discussing in this topic are not free, so we encourage you to check out the pricing yourself to see if it fits your application's workload. You can see more information about pricing for Cognito and S3 linked respectively.

Cognito Setup

We are going to be setting up a flow between our client and AWS for getting files from S3. Our primary concern when setting this up is security. This is where AWS Cognito comes in. Cognito manages authentication between AWS services and users of your app. There is a lot you can do with this service but we are going to keep things as simple as possible for this topic.

The first thing we are going to do is go to the Cognito service in the AWS dashboard and create a new Identity Pool.

The only thing we really care about in this topic is whether we want to give unauthenticated access to users or not. To keep this example simple we will allow unauthenticated access. This will let any user of your app download files from your S3 bucket.

The alternative would be to require the player to sign in with Cognito in order to download files. This is possible using Cognito, and you can see some examples here for how to get that set up, but we won't do that for this topic.

You can tick the “Enable access to unauthenticated identities” checkbox and then click the “Create Pool” button.

On the next page you can skip the config and just click on the “Allow” button. You will then be presented with something like this.

You need to select “Unity” from the Platform drop-down first. This will change the “Get AWS Credentials” field. You can actually ignore most of this. All you need to do is make a note of the Identity pool Id marked in red, and your region. We’ll use those later in our Unity setup.

S3 Setup

The next step is to create a new S3 bucket for your game. Navigate to the S3 service and click on the “Create Bucket” button.

There are two options at the top: the bucket name and the region. Go ahead and fill those out.

You’ll notice a section titled “Block Public Access settings for this bucket”. At the moment we want to leave that as it is. We only want access from Cognito for now but that is something you can look into yourself. There is a guide on that here.

Note - Bucket names must be globally unique. This means that the name you choose has to be unique across all buckets in every AWS account and across all regions. In other words something like “test1” or “myGameBucket” is not going to be allowed. The best thing to do is use something like a domain name, unique to your company and project.
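The character rules can even be checked locally before you try to create a bucket; global uniqueness, of course, can only be verified by AWS itself. Here is a simplified sketch of the core rules (not the complete AWS specification):

```javascript
// Sketch of the basic S3 bucket-naming rules: 3-63 characters, lowercase
// letters, digits, hyphens and dots only, starting and ending with a letter
// or digit, and not formatted like an IP address.
function isValidBucketName(name) {
  if (name.length < 3 || name.length > 63) return false;
  if (!/^[a-z0-9][a-z0-9.-]*[a-z0-9]$/.test(name)) return false;
  if (/^\d+\.\d+\.\d+\.\d+$/.test(name)) return false; // no IP-style names
  return true;
}
```

Note that “myGameBucket” already fails this check because of the capital letters, while “test1” is valid in form but almost certainly taken by another account.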

Note - There are other details and settings available when you create an S3 bucket, but we are going to skip those for simplicity. There is a guide here on best practices with S3 which you can check out for more tips. One thing you could do is give your bucket some tags. Tags let you group services and instances by a common label, which makes it easier to search for certain features and see what is connected to what. Tags also let you see which applications your bills are coming from and how much each application is costing you, so we highly recommend them.

Uploading Files

The next thing we are going to do is upload some files.

We have a section at the bottom of this topic on exporting downloadables so the flow here would be manually uploading your GameSparks downloadables to S3. There are other ways of doing this but this method requires no additional setup.

Go ahead and upload your file. You don't need to worry about any parameters or options available when uploading the file for the moment. Something to note is that you’ll want to use the same “shortCodes” as you used for your GameSparks downloadables. These will become your file names in the bucket, so if a downloadable's shortCode is different from its file name, rename the file to match. That way you won't have to change your code later.

Once your file is uploaded it will bring you back to your bucket menu.

Other Options

There are a lot more features to S3 than what we just covered. Updating and deleting files are things you’ll do often while transitioning your files to the new bucket. We won't cover those here as they are pretty simple.

For a walkthrough of more complicated options and features you can check out a guide here.

Cognito Permissions & IAM

The next thing we need to do is set up our Cognito pool to have permission to access our S3 bucket. We are going to do this by modifying the role created for our Cognito pool. In this case, the role we are talking about is an AWS concept that defines the permissions one service can have in relation to another, i.e. access to data, APIs, or configuration settings.

Our Cognito identity pool role was created automatically and you can see the name of the role it created by going to the “Edit Identity Pool” option in the identity pool dashboard.

And from here we are looking specifically at the “Unauthenticated role”. If you remember in the previous section we made sure that our identity pool did not need authentication for access.

IAM Role

Now that we know the name of our role, go back to the AWS service menu and look for the IAM service.

We need to find our Role in the IAM service menu. There are usually a lot of auto-generated and default roles so you may need to search for yours.

You can click on that role to edit it.

From here things are a little complicated so we need to be careful not to edit anything accidentally.

From here we want to click on the “Attach Policies” button.

On the next screen you will see a button at the top called “Create Policy”; click on that. From here you want to go to the JSON tab, as it is a simpler setup for what we need to do.

There is a visual editor here but in our case we only want read access to objects in our bucket so we only need to allow a single action. The visual editor is better for setting more complex groups of permissions.

There is a lot to take in here, but the essence of this JSON is that we are telling S3 that any service with this policy attached can only “Get” objects in the specific bucket we defined (the “Resource” field) and nothing else. The value in that field is based on the bucket's ARN (Amazon Resource Name), which you can find by going back to S3 and clicking on the properties tab. You will see it at the top of the page. Note that for object access the ARN needs “/*” appended, so the policy covers the objects inside the bucket rather than the bucket itself.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::<your-bucket-name>/*"
        }
    ]
}

Attaching this policy to the Cognito identity pool's role is what gives the pool access to the bucket, so go back to the role and attach the policy we just created.

Now we should be ready to go!

Unity Example

The first thing we need to do in Unity is download and install the AWS SDKs we need for this topic. We are going to need the Core SDK, along with the S3 and CognitoIdentity packages. The best way to do this is through the NuGet package manager for Unity.

From here you can just search for the packages you need and install them directly.

There is also an AWS Mobile SDK, which you will often see linked while searching for solutions to AWS SDK import problems. We are not going to use it for this example.

Setup Cognito

The first thing we are going to do in our Unity project is set up our Cognito authentication as we will need that to access S3.

We are going to create a new script called CDNManager. If you were utilizing AWS for other components of your project you might call this AWSManager instead, but in this case we want to show a very simple example which you can expand yourself as you please.

In the previous section where we set up our Cognito Identity pool, we asked you to take note of the identity pool Id and region. We are going to add these as params in our script, along with some code that will set up our user’s Cognito credentials.

/// <summary>
/// The Id of the Cognito identity pool
/// </summary>
private string IdentityPoolId = "<your-pool-id>";
/// <summary>
/// AWS region where our Cognito identity pool is
/// </summary>
private RegionEndpoint Region = RegionEndpoint.<your-region>;
/// <summary>
/// Used to authenticate the user with AWS
/// </summary>
private CognitoAWSCredentials _credentials;
/// <summary>
/// Getter for our credentials
/// </summary>
public CognitoAWSCredentials Credentials
{
   get
   {
       if (_credentials == null)
       {
           // initialize the credentials //
           Debug.Log("Initializing Cognito Credentials...");
           _credentials = new CognitoAWSCredentials(IdentityPoolId, Region);
       }
       return _credentials;
   }
}
void Start()
{
   // This is needed to allow threading to work for the Unity SDK //
   UnityInitializer.AttachToGameObject(this.gameObject);
   // We need this to setup web-requests with the AWS SDK //
   AWSConfigs.HttpClient = AWSConfigs.HttpClientOption.UnityWebRequest;
   // Now we can get the user's cognito identity Id //
   Credentials.GetIdentityIdAsync(delegate(AmazonCognitoIdentityResult<string> result)
   {
        if (result.Exception != null)
        {
            Debug.LogError(result.Exception);
            return;
        }
       string identityId = result.Response;
       Debug.Log("Cognito user logged in...");
       Debug.Log("Identity ID: "+identityId);
   });
}

Note - One thing to note here is that we must attach the GameObject which contains this script to the UnityInitializer, as this will automatically add the thread dispatcher to the object, which allows for async calls to AWS.

Cognito Identity Id

The main result of this authentication is getting us the IdentityId.

This is like an auth-token we can use to securely communicate with AWS services.

You won't need this for anything else in the topic, because once we have it, the SDK takes care of all future authentication under the hood.

Downloading S3 Files

The next step is to create our code for downloading S3 files.

We will begin by adding some more parameters to our CDNManager script just under our Cognito params.

We need to set up our S3-Client, and for that we need a reference to the bucket name.

/// <summary>
/// The name of our S3 bucket
/// </summary>
private string _bucketName = "<your-bucket-name>";
/// <summary>
/// Amazon S3 Client
/// </summary>
private AmazonS3Client _s3Client;
/// <summary>
/// S3 Client instance getter
/// </summary>
public AmazonS3Client S3Client
{
   get
   {
       if (_s3Client == null)
       {
           Debug.Log("Setting up S3 client...");
            _s3Client = new AmazonS3Client(Credentials, Region);
       }
       return _s3Client;
   }
}

And next we need to download the file. We will read the response stream into a byte array. There are many ways to do this, so we will just provide a simple example.

/// <summary>
/// Downloads a file from S3
/// </summary>
/// <param name="shortCode">File name of S3 file</param>
private void DownloadFromS3(string shortCode)
{
   Debug.Log($"Downloading file [{shortCode}]...");
   S3Client.GetObjectAsync(_bucketName, shortCode, (s3Response) =>
   {
       if (s3Response.Exception != null)
       {
           Debug.LogError(s3Response.Exception);
           return;
       }
       Debug.Log($"Found File: {s3Response.Response.Key}...");
       byte[] byteArray;
       // [1] - Read the data from the S3 response stream into a byte array //
       using (Stream responseStream = s3Response.Response.ResponseStream)
       using (MemoryStream memStream = new MemoryStream())
       {
           responseStream.CopyTo(memStream);
           byteArray = memStream.ToArray();
       }
       // [2] - Convert the byte array to an image //
       ConvertDownloadToSprite(byteArray);
   });
}

For our example we uploaded a PNG file. This file is a texture, but when we try to download the file we will get a byte-array from the stream.

We therefore need to convert our byte-array to a texture. This is pretty simple in Unity, but just remember that it could be more complicated for your own files or JSON files.

Here is our example for converting this byte-array into a texture and then applying it to an object in our scene.

/// <summary>
/// The UI image in our scene that the downloaded sprite will be applied to
/// </summary>
[SerializeField] private UnityEngine.UI.Image sceneImage;
/// <summary>
/// Converts the byte-array to a sprite and assigns it to an image in our scene UI
/// </summary>
/// <param name="byteArray"></param>
private void ConvertDownloadToSprite(byte[] byteArray)
{
   // The 2x2 size is a placeholder; LoadImage resizes the texture to fit //
   Texture2D myImage = new Texture2D(2, 2);
   myImage.LoadImage(byteArray);
   sceneImage.sprite = Sprite.Create(myImage, new Rect(0.0f, 0.0f, myImage.width, myImage.height), Vector2.up);
}

You can see from the simple example below, along with the console logs we’ve been adding to our code how the texture is downloaded and applied.
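The image case above is the simplest one. If your downloadable is a JSON file instead, the conversion step decodes the bytes to text and then parses it. Here is a hedged sketch of that idea, shown in Node for brevity (the helper name is ours; the C# equivalent would use Encoding.UTF8.GetString plus a JSON parser):

```javascript
// Sketch: decode a downloaded byte array as JSON instead of a texture.
function bytesToJson(byteArray) {
  const text = Buffer.from(byteArray).toString("utf8"); // bytes -> string
  return JSON.parse(text);                              // string -> object
}
```

The same two steps, decode then parse, apply regardless of language.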

Solution Without Unity AWS SDK

What do you do if your application is not using Unity, or you need to maintain the GameSparks flow of getting short-lived URLs instead of using Cognito for authentication?

This is also pretty simple but we will only cover this in brief for this topic.

REST API

GameSparks generates its pre-signed URLs over REST using the AWS REST API. This is not too different from the setup we have explained above. The main difference is that we need to get authorization before we can ask for the signed URL, because we aren't using Cognito with this flow.

The result, however, is the same as with GameSparks: you get a URL you can use to download your content for a limited period of time.
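One handy property of these pre-signed URLs is that the validity window travels in the query string: `X-Amz-Date` is the signing time and `X-Amz-Expires` is the lifetime in seconds. That means a client can check locally whether a cached URL is still usable before retrying a download. A hedged Node sketch (the helper name is ours):

```javascript
// Sketch: check whether a SigV4 pre-signed URL is still inside its
// validity window by reading X-Amz-Date and X-Amz-Expires from the URL.
function presignedUrlIsFresh(url, now = new Date()) {
  const params = new URL(url).searchParams;
  const stamp = params.get("X-Amz-Date");               // e.g. 20240101T120000Z
  const lifetime = Number(params.get("X-Amz-Expires")); // seconds
  const signedAt = new Date(stamp.replace(
    /^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})Z$/,
    "$1-$2-$3T$4:$5:$6Z"
  ));
  return now < new Date(signedAt.getTime() + lifetime * 1000);
}
```

A stale URL simply gets an error back from S3, so this check is a convenience to avoid wasted requests, not a security measure.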

Node.js

If you are using Node.js, either for a Lambda integration or your own custom backend, we highly recommend using the AWS Node.js SDK. In Node you can install this using the npm package manager and setting it up is very simple.

There is one additional step which we need for this flow, however: we need to create a new API user in the AWS IAM service. This user represents the backend server, so it will need the same policy we created for Cognito (presuming you only want your backend to “Get” files and not also be able to upload or list them).

For Node, you can then get a pre-signed URL with the following code…

   /**
    * Returns a URL that can be used to download this content
    * @returns {Promise<string>}
    */
   async getUrl() {
     return new Promise((resolve, reject) => {
       const AWS = require('aws-sdk');
       // Credentials of the IAM API user we created for the backend //
       const s3 = new AWS.S3({
         accessKeyId: '<your-access-key-id>',
         secretAccessKey: '<your-secret-access-key>'
       });
       const params = {
         Bucket: '<bucket-name>',
         Key: '<file-name/short-code>',
         Expires: <time-in-seconds> // e.g. 300 for five minutes
       };
       s3.getSignedUrl('getObject', params, (err, url) => {
         if (err) {
           reject(err);
           return;
         }
         resolve(url);
       });
     });
   }

Lambda Function

Without a backend server, you can still use both the REST API and the Node.js examples above. Lambda functions behind an API Gateway are a nice way to take the load off your client and backend, and they keep details like your bucket name and AWS credentials secure.

Remember however, if you want to use a Lambda function you also need to add a role to that Lambda function in order to give it permission to read from the bucket in the same way as we did for the Cognito identity pool.
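To make that concrete, here is a hedged sketch of what such a handler might look like. The bucket name is a placeholder, the event shape assumes the standard API Gateway proxy integration, and the S3 client is passed in so the signing call is easy to stub:

```javascript
// Sketch: a Lambda handler that turns a shortCode into a pre-signed URL.
// The real handler would be built with the AWS SDK's S3 client:
//   makeHandler(new AWS.S3())
const BUCKET = "<your-bucket-name>";

function makeHandler(s3) {
  return async (event) => {
    // shortCode arrives as a query-string parameter, e.g. ?shortCode=logo.png
    const shortCode = event.queryStringParameters.shortCode;
    const url = await s3.getSignedUrlPromise("getObject", {
      Bucket: BUCKET,
      Key: shortCode,
      Expires: 300 // short-lived, like the GameSparks URLs
    });
    return { statusCode: 200, body: JSON.stringify({ url }) };
  };
}
```

Injecting the client like this also keeps the handler easy to unit-test without touching AWS at all.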

Exporting Downloadables

Unfortunately there is no easy way to do this, so you have two options:

  1. Create a script that will automatically download your existing downloadables from GameSparks using GameSparks’ REST API.

This isn't too difficult, depending on the application you use. You can also create a custom application in something simple like Go, JS or Python which could download from GameSparks and upload to S3 using the AWS SDKs for each of those languages.

  2. If you don't have many downloadables to move you can always do your transition manually. We already showed how to upload files to your bucket, but you can also download existing files from GameSparks by going to the file in the portal and hitting the download button.