Uploads

Standard Upload

The standard upload method is best suited for smaller files that do not exceed 6MB. It uses the conventional multipart/form-data format and is straightforward to implement with the appizap-js SDK. The example below uploads a file using the standard upload method.

import { createClient } from '@appizap/appizap-js'

// Create Appizap client
const appizap = createClient('your_project_url', 'your_appizap_api_key')

// Upload file using standard upload
async function uploadFile(file) {
  const { data, error } = await appizap.storage.from('bucket_name').upload('file_path', file)
  if (error) {
    // Handle error
  } else {
    // Handle success
  }
}

While the standard upload method accepts files up to 5GB in size, TUS Resumable Upload is recommended for greater reliability when uploading files larger than 6MB.
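
As a rough sketch, a client can branch on file size to pick a method automatically. The uploadResumable helper below is hypothetical, standing in for the TUS-based uploadFile example shown later in this page:

// Sketch: choose an upload method based on file size
const SIX_MB = 6 * 1024 * 1024

async function uploadAuto(path, file) {
  if (file.size <= SIX_MB) {
    // Small file: a single standard upload is simplest
    return appizap.storage.from('bucket_name').upload(path, file)
  }
  // Larger file: prefer a resumable (TUS) upload
  return uploadResumable(path, file) // hypothetical helper
}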

Overwriting Files

When a file is uploaded to a path that already contains a file with the same name, the default behavior is to return a 400 Asset Already Exists error. To replace an existing file at a given path, set the upsert option to true or use the x-upsert header.

Avoid overwriting files whenever possible, as the Content Delivery Network takes time to propagate changes to all edge nodes, so clients may be served stale content in the meantime.

// Create Appizap client
const appizap = createClient('your_project_url', 'your_appizap_api_key')

await appizap.storage.from('bucket_name').upload('file_path', file, {
  upsert: true,
})

Uploading a file to a new path is the recommended way to avoid propagation delays and stale content.
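
One simple pattern, sketched below, is to generate a unique path per upload instead of reusing an existing one (the bucket and path here are illustrative):

// Sketch: write each upload to a fresh path so the CDN never serves stale content
const uniquePath = `avatars/${crypto.randomUUID()}.jpg`

await appizap.storage.from('bucket_name').upload(uniquePath, file)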

Content Type

By default, Storage infers an asset's content type from its file extension. To set a specific content type for your asset, pass the contentType option when uploading it.

// Create Appizap client
const appizap = createClient('your_project_url', 'your_appizap_api_key')

await appizap.storage.from('bucket_name').upload('file_path', file, {
  contentType: 'image/jpeg',
})

If multiple clients attempt to upload a file to the same path, the first client to finish the upload succeeds, while the other clients receive a 400 Asset Already Exists error.

However, if the x-upsert header is included, the last client to finish the upload succeeds instead.
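
A minimal sketch of handling that race on a losing client follows; the retry-with-upsert step is only appropriate if last-writer-wins is the behavior you actually want:

const { error } = await appizap.storage.from('bucket_name').upload('file_path', file)

if (error) {
  // Another client finished first; deliberately overwrite by retrying with upsert
  await appizap.storage.from('bucket_name').upload('file_path', file, {
    upsert: true,
  })
}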

Resumable Upload

Appizap storage implements the TUS protocol for resumable uploads. TUS stands for The Upload Server, an open protocol that lets an upload resume from where it left off after an interruption. You can use it through the tus-js-client library, or through other TUS-compatible client libraries such as Uppy.

Here's an example of how to upload a file using tus-js-client:

import * as tus from 'tus-js-client'
import { createClient } from '@appizap/appizap-js'

// Create Appizap client
const appizap = createClient('your_project_url', 'your_appizap_api_key')

const projectId = ''

async function uploadFile(bucketName, fileName, file) {
    // Retrieve the current session so the upload can be authorized
    const { data: { session } } = await appizap.auth.getSession()

    return new Promise((resolve, reject) => {
        const upload = new tus.Upload(file, {
            endpoint: `https://${projectId}.appizap.co/storage/v1/upload/resumable`,
            retryDelays: [0, 3000, 5000, 10000, 20000],
            headers: {
                authorization: `Bearer ${session.access_token}`,
                'x-upsert': 'true', // optionally set upsert to true to overwrite existing files
            },
            uploadDataDuringCreation: true,
            removeFingerprintOnSuccess: true, // Important if you want to allow re-uploading the same file https://github.com/tus/tus-js-client/blob/main/docs/api.md#removefingerprintonsuccess
            metadata: {
                bucketName: bucketName,
                objectName: fileName,
                contentType: 'image/png',
                cacheControl: 3600,
            },
            chunkSize: 6 * 1024 * 1024, // NOTE: it must be set to 6MB (for now) do not change it
            onError: function (error) {
                console.log('Failed because: ' + error)
                reject(error)
            },
            onProgress: function (bytesUploaded, bytesTotal) {
                const percentage = ((bytesUploaded / bytesTotal) * 100).toFixed(2)
                console.log(bytesUploaded, bytesTotal, percentage + '%')
            },
            onSuccess: function () {
                console.log('Download %s from %s', upload.file.name, upload.url)
                resolve()
            },
        })

        // Check if there are any previous uploads to continue.
        return upload.findPreviousUploads().then(function (previousUploads) {
            // Found previous uploads so we select the first one.
            if (previousUploads.length) {
                upload.resumeFromPreviousUpload(previousUploads[0])
            }

            // Start the upload
            upload.start()
        })
    })
}

The resumable upload method is recommended when:

  • Uploading large files that may exceed 6MB in size

  • Network stability is a concern

  • You want to have progress events for your uploads

Upload URL

When using the resumable upload endpoint, the storage server generates a distinct URL for each upload, even for multiple uploads to the same path. All chunks are uploaded to this URL with the PATCH method.

This upload URL is valid for 24 hours. If the upload is not completed within that window, the URL expires and the upload must be restarted. TUS client libraries typically create a fresh URL automatically when the previous one expires.
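
Client libraries hide this exchange, but at the protocol level each chunk is a plain PATCH request against the generated URL. A rough sketch of uploading one chunk, with headers per the TUS 1.0.0 specification (the URL, offset, and token are placeholders):

// Sketch: send one chunk to a previously created TUS upload URL
async function patchChunk(uploadUrl, chunk, offset, accessToken) {
  const res = await fetch(uploadUrl, {
    method: 'PATCH',
    headers: {
      'Tus-Resumable': '1.0.0',
      'Upload-Offset': String(offset),
      'Content-Type': 'application/offset+octet-stream',
      authorization: `Bearer ${accessToken}`,
    },
    body: chunk,
  })
  // The server responds with the new offset; resume from it after an interruption
  return Number(res.headers.get('Upload-Offset'))
}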

Concurrency

If multiple clients attempt to upload to the same upload URL simultaneously, only one will succeed; the others receive a 409 Conflict error. Only one client may write to a given URL at a time, which minimizes the risk of data corruption.

When multiple clients upload a file to the same path using distinct upload URLs, the first client to finish the upload succeeds, while the others receive a 409 Conflict error.

When the x-upsert header is included, the last client to finish the upload succeeds instead.
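
With tus-js-client, one way to surface that race is its onShouldRetry hook, sketched here; by default the library retries 409 responses, so this treats losing the race as fatal instead:

const upload = new tus.Upload(file, {
  // ...endpoint, headers, and metadata as in the example above...
  onShouldRetry: function (err, retryAttempt, options) {
    const status = err.originalResponse ? err.originalResponse.getStatus() : 0
    // 409 means another client already finished this upload; retrying will not help
    return status !== 409
  },
})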

Overwriting Files

When a file is uploaded to a path that already exists, the default behavior is to return a 400 Asset Already Exists error. To replace a file at a given path, set the x-upsert header to true.

Avoid overwriting files whenever possible, as the Content Delivery Network (CDN) takes time to propagate changes to all edge nodes, so clients may be served stale content in the meantime.

Uploading a file to a new path is the recommended way to avoid propagation delays and stale content.

S3 Upload

Single request uploads

The PutObject operation uploads a file in a single request, matching the behavior of the SDK's standard upload. Use PutObject for smaller files, where retrying the entire upload on failure is not a problem. The maximum file size on paid plans is 50 GB.

For example, using JavaScript and the aws-sdk client:

import fs from 'node:fs'
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

const s3Client = new S3Client({...})

const file = fs.createReadStream('path/to/file')

const uploadCommand = new PutObjectCommand({
  Bucket: 'bucket-name',
  Key: 'path/to/file',
  Body: file,
  ContentType: 'image/jpeg',
})

await s3Client.send(uploadCommand)

Multipart uploads

Multipart uploads divide a file into smaller segments and upload them in parallel, which optimizes upload speed on high-speed networks. With large files, this lets the client retry individual segments after a network disruption rather than restarting the whole upload.

This approach is preferable to resumable uploads for server-side uploads when upload speed matters more than resumability. The maximum file size on paid plans is 50 GB.

Use the Upload class from an S3 client to upload a file in parts:

import fs from 'node:fs'
import { S3Client } from '@aws-sdk/client-s3'
import { Upload } from '@aws-sdk/lib-storage'

const s3Client = new S3Client({...})

const file = fs.createReadStream('path/to/very-large-file')

// The Upload helper splits the stream into parts and uploads them in parallel
const upload = new Upload({
  client: s3Client,
  params: {
    Bucket: 'bucket-name',
    Key: 'path/to/file',
    ContentType: 'image/jpeg',
    Body: file,
  },
})

await upload.done()

Unfinished multipart uploads are aborted automatically after 24 hours. To abort a multipart upload before that time limit, use the AbortMultipartUpload action.
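
For example, with the aws-sdk client used above (the UploadId is a placeholder; it is returned when the multipart upload is created):

import { S3Client, AbortMultipartUploadCommand } from '@aws-sdk/client-s3'

const s3Client = new S3Client({...})

// Abort an in-flight multipart upload before the 24-hour automatic cleanup
await s3Client.send(
  new AbortMultipartUploadCommand({
    Bucket: 'bucket-name',
    Key: 'path/to/file',
    UploadId: 'upload-id',
  })
)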
