Squarebox

This plugin supports the transfer of files between CatDV and Amazon S3 (Simple Storage Service). In addition, it supports the restore of files that were automatically transitioned from S3 to Glacier cold storage via bucket lifecycle policies.
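The kind of bucket lifecycle rule that performs this automatic transition can be sketched as follows. This is an illustrative example only, not part of the plugin: the rule ID, the "archive/" prefix and the 30-day period are hypothetical values you would choose for your own bucket.

```python
# Hypothetical S3 bucket lifecycle configuration that moves objects into the
# GLACIER storage class after 30 days. The plugin does not create this rule;
# it is configured on the bucket itself (e.g. via the S3 console or boto3).
lifecycle_rule = {
    "Rules": [
        {
            "ID": "archive-to-glacier",           # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},     # only objects under this prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"}  # transition after 30 days
            ],
        }
    ]
}

# With boto3 this could be applied as, for example:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle_rule)
```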

This S3 plugin does NOT support the direct transfer of files between CatDV and a Glacier Vault. Glacier vaults are a separate type of storage which is entirely independent of S3 and would need to be handled by a different plugin.

Before you start

In order to use this Amazon S3 archiving plugin you need:

CatDV Server 8.0.3 or later

CatDV Pegasus 13.0.5 or later

CatDV Plugin licence 'ARCS3' (Rest API licence with multiple sessions)

To trigger Amazon S3 cloud file transfers from the Worker you need:

S3 Worker Plugin 2.1.2, included in this installation as AmazonS3Worker.catdv

In addition, to run the archive service standalone (outside the server):

CatDV Service Control Panel 1.2 or later

New Installation

IMPORTANT: If you don't already have an Amazon S3 account or you don't know the access keys see 'Amazon S3 setup' for further details.

Copy the whole directory extracted from this zip to the plugins directory:

e.g. on Unix systems: /usr/local/catdvServer/plugins
e.g. on Windows: C:\Program Files (x86)\Square Box\CatDV Server\plugins

Set optional server properties for the plugin, if desired:

Open 'Server Config' from the control panel

Enter the required properties into the 'Other' field on the CatDV Server tab. See 'Plugin server properties' below, plus 'Running archive service standalone' and 'Using KMS encryption' where applicable.

Restart the CatDV server from the control panel

Open the CatDV client (or log the client back in to the server)

Configure one or more service accounts. The first account will always be the default account, used to verify that S3 is available and used as a fallback when necessary. See 'Manage service command' below for information on the details and settings for a service account:

In the client, run Tools->Manage Amazon S3 Archive Service

On the accounts tab enter the service account details. Mandatory details are flagged with an asterisk. Note that once a service account has been used to archive a clip, it will no longer be possible to update its identifier.

Optionally modify the defaults on the settings tab (these are mostly applicable to 3rd party S3 compatible services).

Click 'Add'.

If the S3 plugin is being used as a bridge for a 3rd party service, e.g. Dell ECS, ensure that the Service Name and Service Description are set appropriately via the UI tab of the ‘Manage Amazon S3 Archive Service’ command.

If using media store service mappings to automatically map clips to archive locations, they need to be set up. See 'Media store service mappings' below.

Optionally configure archiving, processing and UI settings for the plugin via the corresponding tabs on Tools->Manage Amazon S3 Archive Service.

If running the archive service standalone (configured via a server property in step two), then configure and start the archive service via the service control panel. See 'Running archive service standalone'.

Verify the service setup: In the client, run Tools->Manage Amazon S3 Archive Service again. The service status for the default account should be 'Running (online)'. The status may be 'Running (offline)' if Amazon S3 is not currently accessible.

IF the Worker IS NOT being installed

Delete the Amazon S3 worker plugin file with extension '.catdv' from the Amazon S3 plugin directory (see first step)

IF the Worker IS being installed:

Move the Amazon S3 worker plugin file with extension '.catdv' FROM the Amazon S3 plugin directory installed in the first step TO the worker plugins directory:

e.g. on a Mac: /Library/Application Support/Square Box/Extensions
e.g. on Windows: %ProgramData%\Square Box\Extensions
e.g. on Linux: /usr/local/catdvWorker/extensions

If the worker is running, re-connect to the server (Edit Config / CatDV Server / Reconnect) so that the archive plugin fields are available.

Verify that the Amazon S3 worker actions are available. It may be necessary to quit and re-open the worker to pick up the new worker actions.

Upgrade Installation

Make a note of the current plugin version from this document (or see the latest version in ‘Release Notes’)

If running the archive service standalone, use the service control panel to stop the service.

Stop the CatDV server from the control panel

Copy the whole directory extracted from this zip to the plugins directory:

e.g. on Unix systems: /usr/local/catdvServer/plugins
e.g. on Windows: C:\Program Files (x86)\Square Box\CatDV Server\plugins

Remove or move the directory / files for the prior plugin from the plugins directory.

Carry out any 'Upgrade' instructions for each plugin version listed in the Release Notes (last section of this document) above the last installed version, starting from the oldest applicable version.

Start the CatDV server from the control panel.

Read through the 'Changes' for each plugin version listed in the Release Notes (last section of this document) above the last installed version, and go to Tools->Manage Amazon S3 Archive Service to verify that the details / settings for each account and the archiving / processing / UI settings for the plugin are correct for this installation. See 'Manage service command' below for more details.

If running the archive service standalone (configured via a server property in step two), then update and start the archive service via the service control panel. See 'Running archive service standalone'.

Open the CatDV client (or log the client back in to the server).

Verify the service setup: In the client, run Tools->Manage Amazon S3 Archive Service again. The service status should be 'Running (online)'. The status may be 'Running (offline)' if Amazon S3 is not currently accessible.

Update (or delete) the Amazon S3 worker plugin by following the instructions from the last step for a new installation, making sure to move / remove the .catdv file for the prior version of the plugin from the extensions directory.

Plugin server properties

When running the archive service "InPlugin", the following server properties must be set:

catdv.s3archive_service.licence_code = <generated licence code for plugin>

When running the archive service standalone, the following server properties must ALSO be set:

catdv.s3archive_service.service_mode = Standalone

When running the archive service standalone, the properties for the archive service must include:

catdv.s3archive_service.licence_code = <generated licence code for plugin>
catdv.s3archive_service.service_mode = Standalone
catdv.rest_api.host = <CatDV server hostname>
catdv.rest_api.port = <CatDV API port>
catdv.rest_api.user = <CatDV API user>
catdv.rest_api.password = <CatDV API user password> (** exclude if blank, for dev)

Typical rest API values for a development machine would be (password property omitted for a blank password):

catdv.rest_api.host = localhost
catdv.rest_api.port = 8080
catdv.rest_api.user = Administrator

In addition, the following optional property may be set to turn on debug logging on both the server and standalone service:

catdv.s3archive_service.debug = true

Running Archive Service Standalone

By default, the service that handles archive jobs runs inside the plugin. To run it as a standalone process, ensure that the service mode property is set to Standalone in the server when installing the plugin (see second step of installation) as follows:

catdv.s3archive_service.service_mode = Standalone

The following is required to run the plugin's archive service on a separate machine from the CatDV server:

CatDV Service Control Panel 1.2 or later

To configure and start the standalone archive service using the service control panel:

Verify that at least one service account has been configured in the client / web plugin UI

Install the CatDV Service Control Panel

Open the CatDV Service Control Panel

Click 'Service Config' to open the service configuration

Enter license details on the Licensing tab

Enter required service properties on the Settings tab. See 'Plugin server properties' above.

On the Installation tab, note the install location ('Server Install Path')

On the file system, navigate to the install location and copy the plugin files into the plugins sub-directory.

Click 'Start' to start the archive service.

To update the standalone archive service:

Open the CatDV Service Control Panel

Add / edit service properties on the Settings tab if applicable.

Go to the install location (see Installation tab) and replace the plugin files in the plugins sub-directory with those for the latest version of the plugin.

Click 'Start' to start the archive service.

Amazon S3 Setup / Configuration values

In order to use the plugin, an Amazon AWS account is required; the most straightforward way to proceed is to use an existing Amazon account. If you need to set up an Amazon AWS account for testing purposes, follow the link 'Try Amazon S3 for free' from http://aws.amazon.com/s3/.

NB - The account is only free for *limited usage for a one year period*, see https://aws.amazon.com/s3/pricing/

Once you are logged into an AWS account, http://aws.amazon.com/s3/ will take you to the S3 console. In order for the plugin to copy to / move to / restore from your AWS account, you will need to set up a user with access keys and at least one bucket:

To set up a user:

Open the AWS Identity and Access Management Console: https://console.aws.amazon.com/iam/home#home

Follow the steps under security status to create a User AND create a Group containing that user which has the permission AmazonS3FullAccess.

To get access keys for a user - these are the access key and secret key required to configure the plugin:

Go to https://console.aws.amazon.com/iam/home?#users

Click on the user name

Click on the ‘Security Credentials’ tab

Click ‘Create Access Key'

Either Click 'Download Credentials' or click 'Show User Security Credentials' and copy the key values.

To create a bucket:

Go to http://aws.amazon.com/s3/

Click 'Create Bucket'

Enter bucket name and click 'Create'

This is the bucket name required to configure the plugin. The value for the region can be seen in the URL when viewing the bucket. This must be a region code from the following list (default is us-east-1):

US East (N. Virginia) / US Standard: us-east-1
US West (Oregon): us-west-2
US West (N. California): us-west-1
EU (Ireland): eu-west-1
EU (Frankfurt): eu-central-1
Asia Pacific (Singapore): ap-southeast-1
Asia Pacific (Tokyo): ap-northeast-1
Asia Pacific (Sydney): ap-southeast-2
Asia Pacific (Seoul): ap-northeast-2
South America (Sao Paulo): sa-east-1

The values required to configure the service in the Amazon S3 plugin are:

region/endpoint - region code (above) OR endpoint for an S3 compatible archive

access key - access key from S3 credentials of Amazon S3 user

secret key - secret key from S3 credentials of Amazon S3 user

default bucket - name of an accessible bucket on Amazon S3 which can be used to verify the connection, used as the default bucket the first time a user attempts to copy / move files to Amazon S3

CMK ID - customer master key for authorising client side KMS encryption, if applicable

KMS Key ID (optional) - KMS key ID for authorising server side KMS encryption, if applicable

Amazon Snowball Edge

With the Snowball add-on in conjunction with the multiple accounts add-on, the plugin can archive to an Amazon Snowball Edge on site and restore from the new location once the files have been moved to Amazon S3.

The snowball add-on includes:

Snowball flag on archive accounts

Auto-set pathStyleAccess / disableChunkedEncoding for snowball accounts

Bypass the TransferManager for archives using snowball accounts, with single or multi-part upload depending on file size. Note that multiple parts are uploaded sequentially, one after another

Ability to override the service account on restore

Support for Snowball is enabled as an add-on via a special config property.
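The single vs multi-part decision described above can be sketched as follows. This is an approximation based on the description, not the plugin's actual code: the 100 MB cut-over threshold and 64 MB part size are hypothetical values.

```python
# Hypothetical sketch of choosing single vs multi-part upload by file size,
# with parts uploaded sequentially. Threshold and part size are assumptions.
PART_SIZE = 64 * 1024 * 1024              # hypothetical part size (64 MB)
MULTIPART_THRESHOLD = 100 * 1024 * 1024   # hypothetical cut-over point (100 MB)

def plan_upload(file_size):
    """Return ('single', 1) or ('multipart', part_count) for a given file size."""
    if file_size < MULTIPART_THRESHOLD:
        return ("single", 1)
    # ceiling division: number of sequential parts needed to cover the file
    parts = (file_size + PART_SIZE - 1) // PART_SIZE
    return ("multipart", parts)
```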

Using KMS Encryption

NB - The support for KMS encryption only works with Amazon S3, not with S3 compatible interfaces to other cloud services.

The Amazon S3 plugin supports the use of KMS encryption to protect your data either at rest on S3 (server) OR both in transit to/from S3 and at rest on S3 (client). If you turn on server encryption, the data will be encrypted and decrypted by and on S3. If you turn on client KMS encryption, the plugin will use the Amazon S3 encryption client to interact with S3, rather than the standard client, and all data will be encrypted in the plugin before it is sent to S3 and decrypted in the plugin when it is retrieved from S3.

For more details of Amazon's implementation, see "Using the Amazon S3 Encryption Client" in:

https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html
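For server-side KMS, the underlying S3 API request carries the encryption choice per object. The plugin manages this internally; the following sketch only illustrates the mechanism, and the helper function is hypothetical.

```python
# Hypothetical helper showing how server-side KMS encryption is requested in
# the S3 API: 'ServerSideEncryption' selects aws:kms, and 'SSEKMSKeyId'
# optionally names a specific KMS key (otherwise S3 uses the default key).
def sse_kms_args(kms_key_id=None):
    """Build the extra put-object arguments for server-side KMS encryption."""
    args = {"ServerSideEncryption": "aws:kms"}
    if kms_key_id:
        args["SSEKMSKeyId"] = kms_key_id
    return args

# e.g. with boto3 (key ID is a placeholder):
#   boto3.client("s3").put_object(Bucket="my-bucket", Key="clip.mov",
#                                 Body=data, **sse_kms_args("1234abcd-..."))
```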

To use KMS encryption:

Create a service account, or update an unused account, setting the 'KMS Encryption' to client or server, as required:

The 'client' option encrypts your data locally to ensure its security as it passes to the Amazon S3 service, where it remains encrypted. The S3 service receives your encrypted data and does not play a role in encrypting or decrypting it. The 'server' option enables data to be encrypted in storage on S3 but not in transit. The S3 server encrypts and decrypts the data.

For 'client', a customer master key (CMK) ID for encryption must be provided via the Manage service command and the user associated with the S3 credentials must have permission to use the CMK for encryption and decryption.

For 'server', a kms key ID may optionally be provided via the manage service command (otherwise the default kms key will be used).

The encryption settings for a service account cannot be changed once the account has been used to archive files; otherwise subsequent restores would be unable to decrypt those files.

For 'client' encryption, install the JCE (Java Cryptography Extension) unlimited strength policy files:

Download the files from http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html

Unzip download

Copy the two jar files from the unzipped directory to jre/lib/security for the plugin installation:

i.e. for archive service run 'InPlugin' (default):

e.g. on Unix systems: /usr/local/catdvServer/jre/lib/security/
e.g. on Windows: C:\Program Files (x86)\Square Box\CatDV Server\jre\lib\security

i.e. for archive service run 'Standalone':

e.g. on Unix systems: /usr/local/catdvService/jre/lib/security/
e.g. on Windows: C:\Program Files (x86)\Square Box\CatDV Service\jre\lib\security

Create Media Store service mappings to determine which clips will be archived using the encrypted service account

Using the plugin

The plugin comprises various commands that are available from the Tools menu in the client.

File transfers are initiated by the schedule backup / archive / restore commands but are carried out by a separate process. This means that the client does not automatically reflect updates to a clip on completion of a transfer. Use 'Server->Refresh Window' in the client to pick up any changes during / after transfer.

The plugin includes the following commands:

Command

Description

Manage Amazon S3 Archive Service

View full status details of the archive service and manage the service account(s) required to access S3 archives. Service account details include region / endpoint, access key, secret key, default bucket name and optional CMK ID. If a connection to Amazon S3 cannot be made an error is displayed and the values are not saved.

View Amazon S3 Archive Service Job Queue

Lists jobs, which can be filtered by status (Pending, Complete, Failed, All). It provides the capability to view job contents, view job results and cancel waiting or queued jobs.

Schedule copy to Amazon S3

Adds a copy job for each clip selected (if multiple clips reference the same source media only one job is created). Copy jobs can be scheduled when the archive service is not running or is offline but will only be run when the archive service is online (running and Amazon S3 is accessible). When the copy job is run, the source media associated with the clip is copied from the file system to Amazon S3 and the original file is preserved.

Schedule move to Amazon S3

As copy, but on successful transfer of the file to storage, the job attempts to delete the original file. If for some reason the deletion fails, this does not affect the job status (or the archive status of the clip), but "Purge failed" is recorded in the job data.

Schedule restore from Amazon S3

Adds a restore job for each clip selected (if multiple clips reference the same source media only one job is created). Restores can be scheduled when the archive service is not running or is offline but will only be run when the archive service is online (running and Amazon S3 is accessible). When the restore job is run, the source media associated with the clip is copied from Amazon S3 to the file system.

Purge files archived to Amazon S3

Deletes the source media associated with the selected clips if they have been successfully archived.

Managing Service Accounts

The accounts tab on the ‘Manage Amazon S3 Archive Service’ command provides facilities to manage the service account(s) used to connect to Amazon S3 archives. A single account is usually sufficient to operate the plugin. However, if required, there is an add-on which enables multiple accounts to be set up for security purposes etc. This is enabled as an add-on via a special server config property.

When there are multiple accounts, the account used may be determined automatically via ‘Media Store Service Path Mappings’ (see later section) or selected by the user on archive / restore if allow override of Account ID is turned on from the UI tab of the manage service command (see next section).

An account which is in use, i.e. has been used to archive files, must be unlocked before it can be updated or deleted. This is to ensure that accidental changes cannot be made to an account such that files can no longer be restored. Note it is not possible to change the account identifier on an in-use account.

The accounts tab provides the following operations:

Button

Description

Clear

Clears the current selection / details so that only default values are filled in.

Set Default

Updates the default account to the current selected account.

Add

Creates a new account with the specified details.

Unlock

Unlocks an in-use account in preparation for update / delete.

Update

Updates the selected account with the specified details.

If an account is in use, it must be unlocked before it can be updated. Please take care when updating an account which is in use, as changing some settings could break the restore of files archived with that account. For example, if the account key is switched to one which does not have the same access or if any encryption settings are changed.

Note it is not possible to change the account identifier on an in-use account.

Delete

Deletes the selected account.

Please take care when deleting an account which is in use, as it will no longer be possible to restore any clips archived with that account. This feature is intended for removing test accounts; any applicable clips should be deleted, or restored and re-archived.

Manage service command

The following can be configured from the Manage Service command in the web or desktop UI for the plugin:

ACCOUNTS TAB / DETAILS (for an account):

Field

Description

Account Identifier

Identifying name for this service account. This may contain only alpha-numerics and hyphens. Note it is not possible to change the account identifier on an in-use account.

Region

Region for S3 or endpoint for a 3rd party S3 compatible service

Access Key

Access key for connecting to this service account

Secret Key

Secret key for connecting to this service account

Default Bucket Name

The bucket that will be used as the archive fallback for this account by default. This is not applicable when a media store service mapping applies. Otherwise, if the UI tab settings allow the user to override the bucket then this will only be used as the default the first time the user does an archive. Subsequent archives by the same user will default to the last value they entered.

Default Glacier Tier

The glacier tier that will be applied to restores by default. If the UI tab settings allow the user to override the glacier tier, then this will only be used as the default the first time a user does a restore. Subsequent restores by the same user will default to the last value they selected.

KMS Encryption

Determines whether client, server or no KMS encryption is used. See section on 'Using KMS Encryption' above.

KMS Key ID / CMK ID

Must be provided if client side KMS encryption has been selected (KMS Key ID) OR may be specified if KMS encryption is configured for a bucket in S3 and you need to use a CMK ID other than the default.

ACCOUNTS TAB / SETTINGS (for an account):

Glacier restore expiration (days)

The expiration period, as a number of days, for which a file restored from Glacier deep storage will be accessible directly from S3 (restore from Glacier is always temporary). The default is 2 days.

Host addressing

Determines whether the plugin uses virtual host addressing or resource path addressing to connect with the S3 service. Valid values are 'virtual' or 'pathStyle'. The default value is 'virtual'.
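The difference between the two styles is where the bucket name appears in the request URL, which can be sketched as follows. The helper function is hypothetical and for illustration only; the hostname patterns follow Amazon's standard endpoint format.

```python
# Hypothetical illustration of S3 addressing styles: 'virtual' places the
# bucket in the hostname, 'pathStyle' places it in the resource path.
def s3_url(bucket, key, style="virtual", region="us-east-1"):
    if style == "virtual":
        return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    if style == "pathStyle":
        return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"
    raise ValueError("style must be 'virtual' or 'pathStyle'")
```

Some 3rd party S3 compatible services only support path-style requests, which is the typical reason for changing this setting.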

Signer override

Sets the name of the signature algorithm to use for signing requests. If not set or explicitly set to null, the Amazon S3 SDK will choose a signature algorithm to use based on a configuration file of supported signature algorithms for the service and region. The default is not set, which should ideally not be overridden for the Amazon S3 service. However, other archive services which provide an S3 interface may need to set this property, e.g "S3SignerType".

Manually create target path

Ensures that the 'folders' in the target path are created prior to archiving a file.

This is not applicable to S3 itself, which has no true concept of 'folders'.

Service Name

Enables the override of the plugin service name in job details and clip archive details using this service account. The service name used in plugin menu items and labels in the UI is the one from the UI tab. The only time this override should be used is when service accounts are set up to access multiple different S3 compatible services.

ARCHIVING TAB:

Location mapping

Fallback method for generating the archive file location on S3 if no media store service mapping applies. The default is 'batchMirror'. Valid values are:

batchMirror: The file(s) selected for archive are batched together in a date and time stamped directory (format /yyyy-mm-dd/hh:mm:ss.mmm) and mirror their source file path(s) within the batch directory. This has the effect of versioning uploads to S3 as a new copy is written for each transfer.

mirror: The archive location always mirrors the source file path which means a file is replaced each time it is archived. The downside of this approach is that it does not cater for files with the same path on different file systems.

mediaStoreRelative: Generates relative file paths from the relevant media store(s). If no matching media store path is found, the archive location is generated using the 'mirror' approach.

mediaStoreAbsolute: As 'mediaStoreRelative' but also prepends the path with the name of the media store.
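The 'batchMirror' and 'mirror' mappings can be sketched as follows. This is an approximation based on the descriptions above, not the plugin's actual code; the function name is hypothetical.

```python
# Hypothetical sketch of the 'mirror' and 'batchMirror' location mappings.
from datetime import datetime

def archive_location(source_path, mapping="batchMirror", batch_time=None):
    if mapping == "mirror":
        # Location always mirrors the source path, so a file is replaced
        # each time it is archived.
        return source_path
    if mapping == "batchMirror":
        # Files are batched under a date/time-stamped directory
        # (/yyyy-mm-dd/hh:mm:ss.mmm), so each transfer writes a new copy,
        # effectively versioning uploads to S3.
        ts = (batch_time or datetime.now()).strftime("/%Y-%m-%d/%H:%M:%S.%f")[:-3]
        return ts + source_path
    raise ValueError("unsupported mapping: " + mapping)
```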

Purge directories

Determines whether or not empty directories are deleted when purging (or moving) files.

PROCESSING TAB:

Concurrent job limit

Determines the number of transfers that the archive service will attempt to run concurrently. The default value is 4.

Number of delayed retries

The number of times a waiting job is retried at an interval which increases exponentially. The default value is 10, which is equivalent to roughly 34 hours elapsed running time of the archive service.
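The quoted "roughly 34 hours" figure can be checked with a short calculation, assuming the retry interval starts at 2 minutes and doubles each time. The doubling behaviour is described above, but the 2-minute base interval is an assumption made for this sketch.

```python
# Rough check of the '34 hours' figure for 10 delayed retries, assuming a
# 2-minute base interval that doubles after each retry (base is an assumption).
def total_retry_time(retries=10, base_seconds=120, max_delay=0):
    """Total seconds spent waiting across all retries; max_delay=0 means no cap."""
    total = 0
    for n in range(retries):
        delay = base_seconds * (2 ** n)    # exponential growth
        if max_delay:
            delay = min(delay, max_delay)  # 'Maximum retry delay' caps the interval
        total += delay
    return total

# total_retry_time() == 122760 seconds, i.e. about 34.1 hours
```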

Maximum retry delay (time period in milliseconds)

Limits the maximum time period between retries of waiting jobs. The default value is 0, which represents no maximum value.

Max jobs running delay (time period in milliseconds)

Determines the time period to wait before attempting to run another job when the maximum number of transfers are already running. The default value is 0.

This would typically be increased (either when a typical transfer is likely to be slow or when the concurrent transfer limit is greater than 1) in order to reduce hits to the database to query jobs whilst max transfers are running.

Loop delay (time period in milliseconds)

Determines the frequency with which the archive service checks the Job queue and attempts to run outstanding Jobs. The default value is 30000, equivalent to 30 seconds

Retry running job delay (time period in milliseconds)

Determines the time period after which a Job which is running will be restarted if it has not been updated during that period. Defaults to a value equivalent to 1 hour

Stalled job delay

Determines the time period since a running job's last progress update, after which its status is flagged as stalled. Defaults to a value equivalent to 1 minute

UI TAB:

Service Name

This property may be used to customise the name for an S3 compatible service such as Google Cloud Storage. Note that plugin menu items will not be updated until the UI has re-connected to the server (i.e. the plugin has been reloaded) and that it will not change the service name in any existing jobs or clip archive details.

Service Description

This property may be used to customise the description for an S3 compatible service such as Google Cloud Storage.

Restrict command access

Restricts the specified plugin commands to sys admin users. Can be used to hide 'config' commands only (i.e. Manage Service), ‘config / archive’ (i.e. to restrict copy or move to archive but not restore) or 'all' commands. Default is ‘none’.

Days to display

The number of days into the past for which jobs are listed in the job queue.

Any job updated in this time period is listed. The default value is 10.

Max no jobs to display

The maximum number of jobs which will be listed in the job queue. This overrides days_to_display. The default value is 1000.

Allow overrides

Enables the facility for users to override one or more parameters at the point of archive or restore.

Media Store Service Path Mappings

Media Stores can be used to automatically map clips to their archive locations by adding a service path to a media store. The service path determines the service account, bucket and (optional) folder path for the archived files. For Amazon S3 a service path has the format:

<serviceType>://<accountIdentifier>/<bucketName>

or

<serviceType>://<accountIdentifier>/<bucketName>/<folderPrefix>

For example, if a Media Store were set up with the following two entries:

/Volumes/dance-videos/drafts

amazons3://squarebox-test/video-test/dance

Then a clip located at:

/Volumes/dance-videos/drafts/greatest-show/this-is-me

Would be archived:

using the credentials from the 'squarebox-test' account

into the bucket 'video-test'

with a path of '/dance/greatest-show/this-is-me'

The 'folder path' can be set or overridden on archive if 'location' is set as an allowed override. See Manage Service Command / UI TAB.
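The mapping illustrated by the worked example above can be sketched as follows. This is not the plugin's code; the parsing helper is hypothetical, but it reproduces the example's result.

```python
# Hypothetical sketch of applying a media store service path to a clip path:
# the service path supplies the account, bucket and optional folder prefix,
# and the clip's path below the media store entry is appended.
def map_clip(media_store_path, service_path, clip_path):
    scheme, rest = service_path.split("://", 1)       # e.g. scheme 'amazons3'
    parts = rest.split("/", 2)
    account, bucket = parts[0], parts[1]
    prefix = "/" + parts[2] if len(parts) > 2 else ""  # optional folder prefix
    relative = clip_path[len(media_store_path):]       # path below media store
    return account, bucket, prefix + relative

account, bucket, path = map_clip(
    "/Volumes/dance-videos/drafts",
    "amazons3://squarebox-test/video-test/dance",
    "/Volumes/dance-videos/drafts/greatest-show/this-is-me")
# account == 'squarebox-test', bucket == 'video-test',
# path == '/dance/greatest-show/this-is-me'
```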

Archive details pane

The plugin automatically creates a panel entitled "Amazon S3 Archive" containing the clip metadata which describes its S3 archive state, including:

Field

Description

squarebox.catdv.archive.AmazonS3.serviceType

Type of service responsible for file transfer

squarebox.catdv.archive.AmazonS3.serviceName

Name of service responsible for file transfer

squarebox.catdv.archive.AmazonS3.status

Archive status

squarebox.catdv.archive.AmazonS3.location

Location of file in storage

squarebox.catdv.archive.AmazonS3.restoreLocation

Location the file will be restored to (if a job is pending) or was last restored to

squarebox.catdv.archive.AmazonS3.date

Date (timestamp) of latest change in archive status

squarebox.catdv.archive.AmazonS3.dateLastArchived

Date last archived

squarebox.catdv.archive.AmazonS3.dateLastRestored

Date last restored

squarebox.catdv.archive.AmazonS3.numArchives

The number of times the clip has been successfully archived

squarebox.catdv.archive.AmazonS3.archiveKey

Identifier of file in storage

squarebox.catdv.archive.AmazonS3.batchID

Identifies the batch of files with which the clip was archived

squarebox.catdv.archive.AmazonS3.jobID

Identifier of current / latest archive job

squarebox.catdv.archive.AmazonS3.parentJobID

n/a for Amazon S3 (related to bulk archives)

squarebox.catdv.archive.AmazonS3.userId

ID of user initiating current / latest transfer

squarebox.catdv.archive.AmazonS3.historyJson

Record of all archive activity in json format

squarebox.catdv.archive.AmazonS3.history

Record of all archive activity (including before historyJson added)

squarebox.catdv.archive.AmazonS3.purgeError

Details of purge failure

squarebox.catdv.archive.AmazonS3.accountIdentifier

Identifier of the Amazon S3 Storage Account in CatDV

squarebox.catdv.archive.AmazonS3.regionName

Region of the Amazon S3 archive

squarebox.catdv.archive.AmazonS3.bucketName

Name of bucket to transfer file to / from

Known Issues

(web only) The manage service command on the web has issues with account management: an unintended server plugin framework change causes the first account row to be highlighted after a clear / delete. For now it is recommended to manage accounts only via the desktop client.

Restore can overwrite read only files.

License Code

IMPORTANT: You may install and use this software only in accordance with the terms of the CatDV Server 8 license agreement.

Square Box Systems Ltd.

September 2019

Release notes

2.2.2beta (2019-09-10)

- Retry service start if rest API is not ready

- Modify generation of clip level archive status to incorporate both current and pending status

e.g. 'Archived [Copy running] to ..." etc

- Use queued archive job notifications to expedite processing of newly queued archive jobs

- Send archive job status notifications for all applicable changes in archive job status

- Restore by default to current media file path from clip, rather than from last archived parameters, so that the target restore location can be updated when e.g. media paths are updated with a different volume name (*merged from 1.5.0p2)

2.1.4beta (2019-07-08)

Upgrade

- In the S3 panel, ensure "Amazon S3 Archive History" (identifier squarebox.catdv.archive.AmazonS3.history) is below the new "Amazon S3 History (json)" (identifier squarebox.catdv.archive.AmazonS3.historyJson). Ensure these fields are the last two in the panel as the json history field displays as a table.

Changes

- Permit update of in use accounts via unlock/update/confirm

- Change deletion of in use accounts to unlock/delete/confirm direct from accounts tab

- Update worker plugin to 2.1.2: add account parameter to restore, as required for snowball support

- Update service fields / panel creation to include all fields in field group but exclude some from the panel

- Add json version of archive history to clip archive metadata

- Fix for UnsupportedOperationException in Collections::UnmodifiableMap.put() on plugin init

- Merged in fix from 1.5.0p1

2.1.0betap3 (2019-04-04)

- Support multi-select for cancelling jobs from queue

- Updated job progress notifications to exclude percentage from Copied / Moved / Restored notifications

- Run another job (if available) immediately after completion / failure of a running job

2.1.0beta (2019-04-02)

Upgrade

- Now require CatDV Server 8.0.1 or later and CatDV Pegasus 13.0 or later (for transfer progress notifications)

- Now need to set an undocumented config property to enable multiple accounts, as it is an add on product

Changes

- Fix accounts tab to work in both web and desktop UI now that issues with tabs within a panel have been resolved

- Send job progress notifications, using new notification system (requires server >= 7.4.2 to make use of them)

- Add dev support for snowball edge - this is also an add-on which is enabled via an undocumented config property

- Add capability to override / select the account when copying / moving a file to S3, or restoring a file from S3

- Add capability to delete an account, via flag for deletion combined with save service

- Add config param to enable support for multiple accounts

- Updated version of worker to accommodate new account parameter

- Terminate archive job with no retries if an S3 404 Not Found error occurs

- Include file size in job data / job queue list

- Merged in purge changes from 1.5

- Merged in fixes from 1.4.19

- Merged in fixes from 1.4.18

- Merged in fixes from 1.4.16

2.0.4beta (2019-02-11)

- Fix issue with display of account details tab content on the manage service command in the web UI

- Fix to ensure first account added (rather than migrated from previous installation) is set to be the default

2.0.3beta (2019-02-06)

- Add ability to set default account (purge/restore of pre-migration archives will still use the 'first', i.e. migrated, account)

- Add ability to restrict access to (config and) archive but allow restore

- Display 'Retry' for status of unsuccessful job results if the job is being / will be retried

- Fix NPE listing jobs if user doesn't have permission to access selected clip

2.0.2beta

- Merged in fixes from 1.4.14p4 and 1.4.15

2.0.1beta (2019-01-25)

Upgrade

- The plugin will automatically migrate the connection details from the service definition currently in use (whether in server or standalone) to be the initial 'default' service account. There is a ONE TIME opportunity to customise the identifier of this account by setting the following configuration parameter in the server BEFORE restarting for the first time after the upgrade, as the details of an account which is 'in use' cannot be changed afterwards (NB: the specified account identifier may contain only alphanumerics and hyphens):

catdv.s3archive_service.migration_account_identifier = <my-account-identifier>

With a single account and no media store service mappings, the plugin will continue to operate as before, using the default account and configured location mapping for all archives. With multiple accounts and corresponding media store service mappings, any clips for which no service mapping is found will fall back to use the configured location mapping, as before.

- The plugin will automatically migrate any optional config values from the server properties to the service definition and they should subsequently be edited using the web or desktop Manage Service UI. You should be able to confirm that these have been set appropriately by going into the UI and comparing the values to the ones in the server config. For a standalone service, any additional properties set only in the service config will need to be either copied into the server config before starting the server with the new plugin for the first time OR set manually in the UI.

- After this migration, all optional service config properties should be deleted from the config for the server and (if applicable) service control panel. The following properties are not optional and should not be removed if present:

catdv.s3archive_service.licence_code

catdv.s3archive_service.service_mode

catdv.s3archive_service.debug

Changes (MAJOR UPDATE)

- Add support for multiple archive accounts

- Enhancements to manage service command, enabling most configuration to be done via the UI

- Support for automatic mapping from clips to archive locations, via Media Store service mappings which include a service identifier, account identifier, container name and (optional) 'folder'

e.g. amazons3://squarebox-test-2/vcvideotest/dance

1.5.0p2 (2019-07-09)

- Restore by default to current media file path from clip, rather than from last archived parameters, so that target restore location can be updated when e.g. media paths are updated with a different volume name

1.5.0p1 (2019-06-06)

- Fix NPE on cancel job, when associated clip is not found

1.5 (2019-03-28)

Upgrade

- run catdvarchiveplugins1.5.sql against the CatDV DB to fix the field type for archive date fields

- update worker plugin if applicable

Changes

- Changes to purge (these apply to move to archive as well as direct purge): clear date last restored, log 'File purged' or 'Purge failed' to clip archive params history, add purge error to clip archive params, add option of "Yes to all" confirmation for purging multiple altered files.

1.4.19 (2019-03-22)

- Fix blocking mechanism for ensuring that multiple processes cannot update a job's status simultaneously

- Terminate restore job with no retries if an S3 404 Not Found error occurs

1.4.18 (2019-03-06)

- prevent spurious 'Network outage'/NPE error when no restore directory can be extracted from the restore location

- update thread handling when processing jobs, to ensure a completed job will not be retried before its status has been fully updated

1.4.16 (2019-02-06)

- Display 'Retry' for status of unsuccessful job results if the job is being / will be retried

- Fix NPE listing jobs if user doesn't have permission to access selected clip

1.4.15 (2019-01-31)

- Improve performance of Job Queue command for large lists of jobs

1.4.14p4 (2019-01-28)

- Fix to cause job creation to fail if no source media are updated with the archive details for the job

1.4.14p3 (2019-01-25)

- Fix for issue of jobs stuck in running / stalled state

- Trim trailing path separators from archive / restore location overrides

1.4.14p2 (2019-01-20)

- Fix to ensure plugin calls server rest API as /catdv/api/... instead of /api/... to prevent failed calls due to redirection of POST requests in some environments

1.4.14p1 (2019-01-11)

- Update README with correct version of required worker plugin

1.4.14 (2018-11-13)

- Add plugin command that can be triggered via the rest API to generate clip data for a file archived outside CatDV

- Add config param 'catdv.s3archive_service.max_retry_delay' to limit the delay period between retries of waiting jobs.

1.4.12

- Add config param 'catdv.s3archive_service.purge_directories' to turn off purging of directories when moving / purging files.

- Update README with note on Glacier support

- Update README to cover installing and updating the archive worker plugin, now included as part of the plugin installation.

1.4.11

Upgrade

- If the 'catdv.s3archive_service.allow_override' server property is explicitly set to include 'location:Location:archive', modify it to 'archiveLocation:Location:archive'

Changes

- Add option to restore files to a specified location / directory (can be enabled as an override for restore)

1.4.9 (2018-09-14)

- Fix for error message on cancel job contains NullPointerException

- Fix for issue which causes a new FieldGroup for an archive plugin to be created each time the server is restarted

- Fixed bug causing purge commands to fail from the web UI

- Improve the error reporting when attempting to archive offline files

1.4.8betap3 (2018-07-11)

- Fix build / jar files

1.4.8betap2 (2018-07-03)

- Fix location override for archives triggered from worker

- Log start / end of transfer jobs outside debug mode

- Record archive failure in clip archive status if clip has never been archived

- Update standalone service instructions in README

(** above changes merged from version 1.4.3p1)

- Integrate with ServicePlugin framework

(** above change merged from version 1.4.3)

- Additional location mapping option to determine location on archive from Media Store paths, includes mediaStoreRelative and mediaStoreAbsolute.

- Fix to ensure that location on archive does not contain '\' separators, regardless of location mapping in use

(** above changes merged from patches 1.4.3betap3 and 1.4.3betap4)

- Manage service command: preserve new connection details even if they are not valid / updated, to ease entry

- Manage service command: obfuscate the secret key value

- Rename all packages written for plugins from squarebox.catdv to squarebox.plugins

- Fix for NPE when clip has media file but no metadata (introduced by fix in 1.4.5)

- Fix to eliminate spurious exception starting / stopping standalone service

1.4.5 (2018-05-03)

- Fix for rest.NotFoundException scheduling jobs: change method for checking job exists, to avoid server incompatibility

1.4.5beta

- Move packages in order to split out re-usable plugin SDK classes

- Fix to prevent NPE - on archiving / restoring, skip clips that have no source media (i.e. sequences)

- Fix for location override in WorkerBackupCommand

1.4.4beta (2018-03-29)

- Upgraded httpclient and httpcore libraries (required to support other archive plugin)

1.4.3p1

- Fix location override for archives triggered from worker

- Log start / end of transfer jobs outside debug mode

- Record archive failure in clip archive status if clip has never been archived

- Update standalone service instructions in README

1.4.3 (2018-06-08)

- Integrate with ServicePlugin framework

1.4.3betap4 (2018-05-02)

- Fix to ensure that location on archive does not contain '\' separators, regardless of location mapping in use

- Updated location mapping options to add mediaStoreRelative and mediaStoreAbsolute

1.4.3betap3 (2018-04-19)

- Additional location mapping option to determine location on archive from Media Store paths

1.4.3betap2 (2018-03)

- Fix to truncate job status before passing to Rest API

1.4.3betap1 (2018-03)

- Removed obsolete duplicate aws library

- Additional debug output for KMS encryption

1.4.3beta (2018-02)

- Add capability to use either client or server side KMS encryption. This is set via the config param 'catdv.s3archive_service.use_kms_encryption', which should now be set to 'client', 'server' or left unset for no encryption. For backwards compatibility, a value of 'true' sets server side encryption and a value of 'false' sets no encryption.

- Add config param 'catdv.s3archive_service.allow_override' to enable override of glacier tier and archive location from the UI.

- Add config param 'catdv.s3archive_service.location_mapping', which allows the batching of file archive locations to be turned off - i.e. subsequent archives replace the existing file(s), rather than creating new 'date / time stamped' copies.

- Fix to ensure server config values that affect the UI are picked up on startup when archive service is standalone.

- Fix exponential back off timing for archive job retries

- Add date last restored as an archive parameter
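As a minimal sketch, the 1.4.3beta encryption setting above is entered in the server config 'Other' field alongside the other plugin properties (the value shown is just one of the documented options, not a recommendation):

catdv.s3archive_service.use_kms_encryption = client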

1.4.2betap1 (2018-01-22)

- Fix for archive service start

1.4.2beta (2018-01-09)

Upgrade

- run catdvarchiveplugins1.4.2.sql against the CatDV DB to rename a job data field and to update the textual job and clip archive status values for consistency

Changes

- Add capability to run plugin archive service standalone, via config param catdv.s3archive_service.service_mode

- Add capability to restore from Glacier deep storage. If the object being restored is in deep storage, a request is made to transfer the object from deep storage to S3, where it will be available for a specified number of days (2 days by default, can be configured via catdv.s3archive_service.glacier_restore_expiration). The archive service will attempt to periodically retry the restore job until the object is available on S3 and can be fully restored or the maximum number of retries has been attempted.

- Add capability to use AWS KMS to encrypt objects in transit to / from and at rest on Amazon S3. This requires AWS KMS to be set up so that the SDK can get the Customer Master Key (CMK) for encryption. Set the new config param 'catdv.s3archive_service.use_kms_encryption' to 'true' to turn KMS encryption on and ensure that a CMK ID is provided via the Manage service command.

- Provision for providing more detailed status information for jobs and (archiving) files

- Add detailed job status and job data to Job Details pane in the Service Job Queue UI

- Fix for archive failure due to special characters in the file path / name (such as ™), with an error message of "The request signature we calculated does not match the signature you provided". The original file path in the object metadata is now url encoded on archive and decoded on metadata retrieval.

- New config param 'catdv.s3archive_service.max_jobs_to_display' for configuring the maximum number of jobs which will be listed in the job queue. This overrides days_to_display and defaults to 1000

- Increased default value for 'catdv.s3archive_service.loop_delay' to 30 seconds

- Increased default value for 'catdv.s3archive_service.concurrent_transfer_limit' to 4

- Increased default value for 'catdv.s3archive_service.days_to_display' to 10

- Updated Amazon AWS for Java library to version 1.11.221
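Taken together, the 1.4.2beta tunables above could appear in the server config 'Other' field as follows (a sketch only - the values shown are the documented defaults, and the exact units/format should be checked against the plugin's current README):

catdv.s3archive_service.glacier_restore_expiration = 2
catdv.s3archive_service.max_jobs_to_display = 1000
catdv.s3archive_service.loop_delay = 30
catdv.s3archive_service.concurrent_transfer_limit = 4
catdv.s3archive_service.days_to_display = 10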

1.4.1betap6 (2017-12-22)

- Fixes for worker plugin commands

1.4.1betap5 (2017-12-20)

- Return json response message from WorkerPurgeCommand

- Include metaclipID where applicable in error details for worker backup / restore / purge commands

1.4.1betap4 (2017-12-12)

- Fix bulk backup and restore commands to capture and report unexpected errors queuing child jobs

- Update worker backup and restore commands to return a JSON representation of the archive result in the command response.

1.4.1betap3 (2017-12-01)

- Fix to ignore clips with duplicate mapped file paths, before processing to queue jobs

1.4.1betap2

- New hidden versions of backup (copy/move) and purge commands for use by worker

1.4.1betap1 (2017-11-16)

- Fix to ensure archive metadata for source media with matching media paths are updated simultaneously

- Additional service logging, new config param 'catdv.s3archive_service.debug' now turns on/off most logging after initial plugin startup

1.4.1beta (2017-06-09)

Upgrade

- remove 'catdv.rest_api.*' config properties in Server Config on CatDV control panel

- run catdvarchiveplugins1.3.5.sql against the CatDV DB to fix the field type for archive date fields

Changes

- major update to job queue command UI, utilising new features provided by version 3 of plugin framework

- make API calls in process (inside server) from plugin commands and from archive service when running inside server (depends on version 3 of plugin framework)

1.3.5beta (2017-06-09)

Upgrade

- change config property 'catdv.rest_api.client_key' to 'catdv.s3archive_service.licence_code' in Server Config on CatDV control panel

- run catdvarchiveplugins1.3.5.sql against the CatDV DB to fix the field type for archive date fields

Changes

- improve job processing loop so that job priorities are always respected: only take one job from the queue at a time; when waiting job delays have elapsed, re-queue jobs rather than processing waiting jobs directly (note: re-queued jobs of equal priority will be processed before newer jobs, as the lowest job id is processed first)

- ensure that transfers to S3 never block the job processing loop, keeping the connection status up to date

- improve processing of multiple concurrent transfers plus config param max_jobs_running_delay for optimisation

- improve handling when S3 is unavailable: after about 30s the service status updates to "Running (offline)"

- restart orphaned "in progress" jobs (e.g. from a server restart) more quickly

- switch to using statusCode as primary status value (except job queries, which require the latest server)

- add support for pausing a service (processing transfers)

- improve plugin licence handling

- update to version 1.11.127 of Amazon AWS Java SDK

- script to fix archive date fields in desktop client for existing installations

- fix NPE when clip has no metadata

1.3.4beta3 (2017-03-02)

- Option to specify destination location 'path' when copying / moving files to Amazon S3 from the worker

- Add config param "catdv.s3archive_service.restrict_command_access" which can be used to hide 'all' plugin commands or 'config' commands (currently Manage Service). See README for details.

- fix archive date fields in desktop client for new installations

1.3.4beta2 (2017-03-02)

- Option to enter bucket name when copying / moving files to Amazon S3. The bucket name on the Manage Service screen is now only the default value filled in the first time a user attempts to copy or move files. Once the user has entered a bucket, a subsequent move / copy will default to the last value entered. The bucket warning and "Apply bucket to queued jobs" checkbox are no longer on the Manage Service screen as changes there now only affect the default bucket for a new user of the plugin.

- to support HGST/StrongBox archives, add new config param catdv.s3archive_service.signer_override and upgrade to aws-java-sdk-1.11.52 (along with dependencies: joda-time-2.9.5.jar, httpcore-4.4.5.jar, httpclient-4.5.2.jar)

1.3.3 (2016-12-07)

Upgrade

- generate a rest api licence code registered to the customer for client 'ARCS3' and add to the configured properties:

catdv.rest_api.client_key=<generated_licence_code>

- run catdvs3plugin1.3.3.sql against the CatDV DB to set status codes for existing Jobs

- check in web admin that there are not two Amazon S3 archive panels

- if there are, delete the one that is not referenced by the data field of the service table entry for 'AmazonS3'

Changes

- use a separate license pool for each archive plugin's access to the Rest API

- fixes for archiving of complex media

- complex media archive status reflects the archive status of its contents (clip media 'Archive Details' field now updated along with the archive values stored in clip media metadata)

- rename package squarebox.util to squarebox.archiveplugin.util to avoid name clashes in the server

- add more detail to successful output for job results

- change location of files on S3 archive: <batch id>/<actual file path>

- the batch id is the date/time that a selection of files was scheduled for archive to S3

- ** this means files on the archive will no longer be overwritten, but may have multiple copies

- propagate changes to archive metadata to all source media with the same value for mediaPath

- set status code on archive Jobs along with text status

- purge empty directories along with files when using the 'Schedule move... ' or 'Purge files...' commands

- the top two directory levels are always preserved even if empty, e.g. '/volumes/media/'

- automatically create field definition(s) for any new archive metadata and add them to the plugin's archive panel

- throw a meaningful error if any of the properties required by the RestAPI are not set

1.3.2p1 (2016-09-21)

- patch for NullPointerException when scheduling transfers for clips with no existing metadata

1.3.2 (2016-09-08)

Upgrade

- Run catdvs3plugin1.3.2.sql against the CatDV DB to update the Job types

Changes

- fix for archiving mapped paths

- changed plugin command names and job types for clarity

1.3.1 (2016-08-10)

Initial production version

** If you previously had a beta version installed:

- any existing archive jobs should be removed from the database

- any manually created archive panel should be removed

- any existing archive data will be ignored (field names now prefixed with service type)

Copyright © Square Box Systems Ltd. 2002-2019. All rights reserved.