Squarebox

This plugin supports the transfer of files between CatDV and Amazon S3 (Simple Storage Service). In addition, it supports the restore of files that were automatically transitioned from S3 to Glacier cold storage via bucket lifecycle policies.

This S3 plugin does NOT support the direct transfer of files between CatDV and a Glacier vault. Glacier vaults are a separate type of storage, entirely independent of S3, and would need to be handled by a different plugin.
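Such lifecycle transitions are configured on the bucket in AWS itself, not by this plugin. For reference, a lifecycle rule that moves objects to Glacier after 30 days, in the JSON form accepted by the AWS CLI command put-bucket-lifecycle-configuration, looks roughly like this (the rule ID and day count here are placeholders):

```json
{
  "Rules": [
    {
      "ID": "archive-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

Objects transitioned this way can then be restored via the plugin's normal restore command (see 'catdv.s3archive_service.glacier_restore_expiration' below).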

Before you start

In order to use this Amazon S3 archiving plugin you need:

CatDV Server 7.3 or later

CatDV Pegasus 12.0b15 or later

CatDV Plugin licence 'ARCS3' (REST API licence with multiple sessions)

To trigger Amazon S3 cloud file transfers from the Worker you need:

S3 Worker Plugin 1.5, included in this installation as AmazonS3Worker1.5.catdv

In addition, to run the archive service standalone (outside the server):

CatDV Service Control Panel 1.2 or later

Installation

NEW

Copy the whole directory extracted from this zip to the plugins directory:

e.g. on unix systems: /usr/local/catdvServer/plugins

e.g. on Windows: C:\Program Files (x86)\Square Box\CatDV Server\plugins

Set optional server properties for the plugin, if desired:

Open 'Server Config' from the control panel

Enter required properties into the CatDV Server tab, 'Other' field. See 'Plugin server properties' below, plus 'Running archive service standalone' and 'Using KMS encryption', if applicable.

Enter optional properties if desired.

(** If the S3 plugin is being used as a bridge to a third-party service, e.g. Dell ECS, ensure that the server properties initial_name and initial_description are set appropriately. See 'Plugin server properties' below.)

Restart the CatDV server from the control panel

Open the CatDV client (or log the client back in to the server)

Configure the parameters to access Amazon S3: in the client, run Tools->Manage Amazon S3 Archive Service, enter the region/endpoint, access key, secret key and bucket name, and click 'Save' or 'Save and start'. If the values entered are valid and a connection to Amazon S3 can be established, then when the service mode is 'InPlugin' the archive service is started automatically. The service status can be monitored on the management and job queue screens.

* NB - If you don't already have an Amazon S3 account or you don't know the access keys see 'Amazon S3 setup' for details of how to set them up.

If running the archive service standalone (configured via a server property in step two), then configure and start the archive service via the service control panel. See 'Running archive service standalone'.

Verify the service setup: In the client, run Tools->Manage Amazon S3 Archive Service again. The service status should be 'Running (online)'. The status may be 'Running (offline)' if Amazon S3 is not currently accessible.

Install (or delete) the Amazon S3 worker plugin:

IF the Worker IS NOT installed: Delete the Amazon S3 worker plugin file with extension '.catdv' from the Amazon S3 plugin directory (see step 1).

IF the Worker IS installed:

Move the Amazon S3 worker plugin file with extension '.catdv' FROM the Amazon S3 plugin directory installed in step 1 TO the worker plugins directory:

e.g. on a Mac: /Library/Application Support/Square Box/Extensions

e.g. on Windows: %ProgramData%\Square Box\Extensions

e.g. on Linux: /usr/local/catdvWorker/extensions

If the worker is running, re-connect to the server (Edit Config / CatDV Server / Reconnect) so that the archive plugin fields are available.

Verify that the Amazon S3 worker actions are available. **It may be necessary to quit and re-open the worker to pick up the new worker actions.

UPDATE

Make a note of the current plugin version, from AmazonS3_README.txt / AmazonS3_ReleaseNotes.txt in the plugin directory or from the server log at startup when the plugin is loaded.

If running the archive service standalone, use the service control panel to stop the service.

Stop the CatDV server from the control panel

Copy the whole directory extracted from this zip to the plugins directory:

e.g. on unix systems: /usr/local/catdvServer/plugins

e.g. on Windows: C:\Program Files (x86)\Square Box\CatDV Server\plugins

Remove or move the directory / files for the prior plugin from the plugins directory.

Carry out any 'Upgrade' instructions for each plugin version listed in the AmazonS3_ReleaseNotes.txt (from this zip) above the last installed version, starting with the oldest applicable version and working forwards.

Read through the 'Changes' for each plugin version listed in the AmazonS3_ReleaseNotes.txt (from this zip) above the last installed version, to check whether any new config params have been added which could usefully be customised for this installation.

Start the CatDV server from the control panel

If running the archive service standalone (configured via a server property in step two), then update and start the archive service via the service control panel. See 'Running archive service standalone'.

Open the CatDV client (or log the client back in to the server)

Verify the service setup: In the client, run Tools->Manage Amazon S3 Archive Service again. The service status should be 'Running (online)'. The status may be 'Running (offline)' if Amazon S3 is not currently accessible.

Update (or delete) the Amazon S3 worker plugin by following the instructions from step 8 for a new installation, making sure to move / remove the .catdv file for the prior version of the plugin from the extensions directory.

Running archive service standalone

By default, the service that handles archive jobs runs inside the plugin. To run it as a standalone process, ensure that the service mode property is set to Standalone in the server when installing the plugin (see step 2 of Installation):

catdv.s3archive_service.service_mode = Standalone

The following is required to run the plugin's archive service on a separate machine from the CatDV server:

- CatDV Service Control Panel 1.1.1 or later

** NB The archive service connection must be configured in the client / web plugin UI before it can be started.

To configure and start the standalone archive service using the service control panel:

Install the CatDV Service Control Panel

Open the CatDV Service Control Panel

Click 'Service Config' to open the service configuration

Enter license details on the Licensing tab

Enter required and optional service properties on the Settings tab. See 'Plugin server properties' below.

On the Installation tab, note the install location ('Server Install Path')

On the file system, navigate to the install location and copy the plugin files into the plugins sub-directory.

Click 'Start' to start the archive service.

To update the standalone archive service:

* The service should have been stopped via the Service Control Panel 'Stop' prior to stopping the CatDV Server.

Open the CatDV Service Control Panel

Add / edit service properties on the Settings tab if applicable.

Go to the install location (see Installation tab) and replace the plugin files in the plugins sub-directory with those for the latest version of the plugin.

Click 'Start' to start the archive service.

Using KMS Encryption

The Amazon S3 plugin supports the use of KMS encryption to protect your data either at rest on S3 (server) OR both in transit to/from S3 and at rest on S3 (client). If you turn on server encryption, the data will be encrypted and decrypted by and on S3. If you turn on client KMS encryption, the plugin will use the Amazon S3 encryption client to interact with S3, rather than the standard client, and all data will be encrypted in the plugin before it is sent to S3 and decrypted in the plugin when it is retrieved from S3.

For more details of Amazon's implementation, see "Using the Amazon S3 Encryption Client" in:

https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html

*NB - The support for KMS encryption only works with Amazon S3, not with S3 compatible interfaces to other cloud services.

To use KMS encryption:

Stop the CatDV server if already running.

Set the following server property (this will take effect when the server is started):

catdv.s3archive_service.use_kms_encryption = client

OR

catdv.s3archive_service.use_kms_encryption = server

*NB - This is a global setting, not per file, and should not be changed once files have been archived, otherwise subsequent restores will not decrypt those files.

For 'client' encryption, install the JCE (Java Cryptography Extension) unlimited strength policy files:

Download the files from http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html

Unzip the download

Copy the two jar files from the unzipped directory to jre/lib/security for the plugin installation:

For archive service run 'InPlugin' (default):

e.g. on unix systems: /usr/local/catdvServer/jre/lib/security/

e.g. on Windows: C:\Program Files (x86)\Square Box\CatDV Server\jre\lib\security

For archive service run 'Standalone':

e.g. on unix systems: <path to your jre>/jre/lib/security/

e.g. on Windows: <path to your jre>\jre\lib\security

Start the CatDV server

Plugin server properties

When running the archive service "InPlugin", the following server properties must be set:

catdv.s3archive_service.licence_code = <generated licence code for plugin>

When running the archive service standalone, the following server properties must ALSO be set:

catdv.s3archive_service.service_mode = Standalone

When running the archive service standalone, the properties for the archive service must include:

catdv.s3archive_service.licence_code = <generated licence code for plugin>

catdv.s3archive_service.service_mode = Standalone

catdv.rest_api.host = <CatDV server hostname>

catdv.rest_api.port = <CatDV API port>

catdv.rest_api.user = <CatDV API user>

catdv.rest_api.password = <CatDV API user password> (** exclude if blank, for dev environment)

Typical rest API values for a development machine would be (password property omitted for a blank password):

localhost, 8080, Administrator
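Putting the required standalone properties together with the typical development values above, the Settings tab might contain something like this (the licence code is a placeholder — use the code generated for your installation):

```properties
catdv.s3archive_service.licence_code = XXXX-XXXX-XXXX
catdv.s3archive_service.service_mode = Standalone
catdv.rest_api.host = localhost
catdv.rest_api.port = 8080
catdv.rest_api.user = Administrator
```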

In addition, the following optional properties can also be set:

catdv.service.api_timeout = <time_period_in_milliseconds>

Determines the timeout period after which the current connection to the CatDV API will be assumed to have timed out and therefore be replaced with a new connection (this is to pre-empt exceptions due to timeouts).

catdv.s3archive_service.service_mode = InPlugin / Standalone

Determines whether the job processing service for the plugin runs inside the plugin or as a standalone process outside the CatDV server. The default value is "InPlugin".

catdv.s3archive_service.restrict_command_access = NOT SET / config / all

If set, restricts the specified plugin commands to sys admin users. Can be used to hide 'config' commands only (i.e. Manage Service), or 'all' commands.

NB - this only works with CatDV client versions from 12 onwards.

catdv.s3archive_service.allow_override

Enables the facility to override one or more parameters at the point of archive or restore. The comma-separated string value may include:

archiveLocation:Location:archive

restoreLocation:Location:restore

aws.bucket_name:Bucket name:archive

aws.glacier_tier:Glacier tier:restore

The middle item is the UI field label and may be modified. The default value is "aws.bucket_name:Bucket name:archive". Set to 'none' to allow no overrides.

catdv.s3archive_service.location_mapping

Method for generating the archive file location on S3. Valid values are:

batchMirror: The file(s) selected for archive are batched together in a date- and time-stamped directory (format /yyyy-mm-dd/hh:mm:ss.mmm) and mirror their source file path(s) within the batch directory. This has the effect of versioning uploads to S3, as a new copy is written for each transfer.

mirror: The archive location always mirrors the source file path, which means a file is replaced each time it is archived. **NB The downside of this approach is that it does not cater for files with the same path on different file systems.

mediaStoreRelative: Generates relative file paths from the relevant media store(s). If no matching media store path is found, the archive location is generated using the 'mirror' approach.

mediaStoreAbsolute: As 'mediaStoreRelative', but also prepends the path with the name of the media store.

The default is 'batchMirror'.
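To illustrate the difference between the first two options, here is a minimal sketch of how each mapping might derive an S3 key from a source path. This is an assumption for illustration only, not the plugin's actual code:

```python
# Illustrative sketch only (not the plugin's actual code): how the
# 'batchMirror' and 'mirror' location_mapping options might derive an
# S3 key from a source file path.
from datetime import datetime
from pathlib import PurePosixPath

def archive_location(src_path, mapping="batchMirror", now=None):
    # 'mirror' simply drops the leading separator, so a file is
    # replaced each time it is archived
    mirrored = str(PurePosixPath(src_path)).lstrip("/")
    if mapping == "mirror":
        return mirrored
    if mapping == "batchMirror":
        # 'batchMirror' prefixes a date- and time-stamped batch
        # directory (yyyy-mm-dd/hh:mm:ss.mmm), so each transfer
        # writes a new copy
        now = now or datetime.now()
        stamp = now.strftime("%Y-%m-%d/%H:%M:%S.") + "%03d" % (now.microsecond // 1000)
        return stamp + "/" + mirrored
    raise ValueError("unsupported mapping: " + mapping)
```

For example, archive_location("/media/project/clip.mov", "mirror") yields "media/project/clip.mov", while the batchMirror form prefixes a timestamped batch directory so earlier uploads are left intact.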

catdv.s3archive_service.days_to_display = <number_of_days>

The number of days into the past for which jobs are listed in the job queue.

Any job updated in this time period is listed. The default value is 10.

catdv.s3archive_service.max_jobs_to_display = <number_of_jobs>

The maximum number of jobs which will be listed in the job queue. This overrides days_to_display. The default value is 1000.

catdv.s3archive_service.loop_delay = <time_period_in_milliseconds>

Determines the frequency with which the archive service checks the Job queue and attempts to run outstanding Jobs. The default value is 30000, equivalent to 30 seconds.

catdv.s3archive_service.max_jobs_running_delay = <time_period_in_milliseconds>

Determines the time period to wait before attempting to run another job when the maximum number of transfers are already running. This would typically be increased (either when a typical transfer is likely to be slow or when the concurrent transfer limit is greater than 1) in order to reduce database queries for jobs whilst the maximum number of transfers are running. The default value is 0.

catdv.s3archive_service.retry_in_progress_job_delay = <time_period_in_milliseconds>

Determines the time period after which a Job which is running will be restarted if it has not been updated during that period. Defaults to a value equivalent to 1 hour.

catdv.s3archive_service.num_delayed_job_retries

The number of times a waiting job is retried at an interval which increases exponentially. The default value is 10, which is equivalent to roughly 34 hours elapsed running time of the archive service.

catdv.s3archive_service.max_retry_delay = <time_period_in_milliseconds>

Limits the maximum time period between retries of waiting jobs. The default value is 0, which represents no maximum value.
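The interplay between num_delayed_job_retries and max_retry_delay can be pictured with a small sketch. This is an assumption for illustration — the plugin's actual timing code and base interval are not documented here:

```python
# Illustrative sketch only (not the plugin's actual code): exponential
# back-off for waiting jobs, capped by max_retry_delay (0 = no cap).
# Times are in milliseconds, matching the server properties above.
def retry_delay(attempt, loop_delay=30_000, max_retry_delay=0):
    delay = loop_delay * (2 ** attempt)  # delay doubles on each retry
    if max_retry_delay > 0:
        delay = min(delay, max_retry_delay)
    return delay
```

With a cap set, later retries all wait the capped period instead of backing off indefinitely.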

catdv.s3archive_service.concurrent_transfer_limit

Determines the number of transfers that the Amazon S3 transfer manager will attempt to run concurrently. The default value is 4.

catdv.s3archive_service.host_addressing

Determines whether the plugin uses virtual host addressing or resource path addressing to connect with the S3 service. Valid values are 'virtual' or 'pathStyle'. The default value is 'virtual'.

catdv.s3archive_service.signer_override

Sets the name of the signature algorithm to use for signing requests. If not set or explicitly set to null, the Amazon S3 SDK will choose a signature algorithm based on a configuration file of supported signature algorithms for the service and region. The default is not set, which should ideally not be overridden for the Amazon S3 service. However, other archive services which provide an S3 interface may need to set this property, e.g. "S3SignerType".

catdv.s3archive_service.use_kms_encryption

Determines whether to use KMS encryption and in what capacity. Valid values are 'client' or 'server' (for backwards compatibility 'true' is equivalent to 'client' and 'false' is equivalent to not set). The default is not set. The 'client' option encrypts your data locally to ensure its security as it passes to the Amazon S3 service, where it remains encrypted. The S3 service receives your encrypted data and plays no role in encrypting or decrypting it. The 'server' option enables data to be encrypted in storage on S3, but not in transit; the S3 server encrypts and decrypts the data.

*NB - For 'client', a customer master key (CMK) ID for encryption must be provided via the Manage service command, and the user associated with the S3 credentials must have permission to use the CMK for encryption and decryption.

*NB - For 'server', a KMS key ID may optionally be provided via the Manage service command (otherwise the default KMS key will be used).

catdv.s3archive_service.glacier_restore_expiration

The expiration period, as a number of days, for which a file restored from Glacier deep storage will be accessible directly from S3 (a restore from Glacier is always temporary).

catdv.s3archive_service.initial_name

The name used to initially create the 'in server' service. The default value is "Amazon S3". This property may be used to customise the name for an S3-compatible service such as Google Cloud Storage.

catdv.s3archive_service.initial_description

The description used to initially create the 'in server' service. The default value is "Archive to and restore from Amazon S3 cloud storage". This property may be used to customise the description for an S3-compatible service such as Google Cloud Storage.

Amazon S3 Setup / Configuration values

In order to use the plugin, an Amazon AWS account is required. If you need to set up an AWS account for testing purposes, follow the 'Try Amazon S3 for free' link from http://aws.amazon.com/s3/; the most straightforward way to proceed is to sign up using an existing Amazon account.

NB - The account is only free for *limited usage for a one year period*: "As part of the AWS Free Usage Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 standard storage, 20,000 Get Requests, 2,000 Put Requests, and 15GB of data transfer out each month for one year. You pay the rates on the pricing page for usage over and above the free monthly limits for the first year and for any subsequent usage."

Once you are logged into an AWS account, http://aws.amazon.com/s3/ will take you to the S3 console.

In order for the plugin to copy to / move to / restore from your AWS account, you will need to set up a user with access keys and at least one bucket:

To set up a user:

Open the AWS Identity and Access Management Console

https://console.aws.amazon.com/iam/home#home

Follow the steps under security status to create a User AND create a Group containing that user which has the permission AmazonS3FullAccess.

To get access keys for a user:

Go to https://console.aws.amazon.com/iam/home?#users

Click on the user name

Click on the 'Security Credentials' tab

Click 'Create Access Key'

Either Click 'Download Credentials' or click 'Show User Security Credentials' and copy the key values.

*NB These are the access key and secret key required to configure the plugin.

To create a bucket:

Go to http://aws.amazon.com/s3/

Click 'Create Bucket'

Enter bucket name and click 'Create'

*NB This is the bucket name required to configure the plugin. The value for the region can be seen in the URL when viewing the bucket. This must be a region code from the following list (default is us-east-1):

US East (N. Virginia) / US Standard: us-east-1

US West (Oregon): us-west-2

US West (N. California): us-west-1

EU (Ireland): eu-west-1

EU (Frankfurt): eu-central-1

Asia Pacific (Singapore): ap-southeast-1

Asia Pacific (Tokyo): ap-northeast-1

Asia Pacific (Sydney): ap-southeast-2

Asia Pacific (Seoul): ap-northeast-2

South America (Sao Paulo): sa-east-1

The values required to configure the service in the Amazon S3 plugin are:

region/endpoint - region code (above) OR endpoint for an S3 compatible archive

access key - access key from S3 credentials of Amazon S3 user

secret key - secret key from S3 credentials of Amazon S3 user

default bucket - name of an accessible bucket on Amazon S3 which can be used to verify the connection, used as the default bucket the first time a user attempts to copy / move files to Amazon S3

CMK ID - customer master key for authorising client side KMS encryption, if applicable

KMS Key ID (optional) - KMS key ID for authorising server side KMS encryption, if applicable

Using the plugin

The plugin comprises various commands that are available from the Tools menu in the client.

File transfers are initiated by the schedule backup / archive / restore commands but are carried out by a separate process. This means that the client does not automatically reflect updates to a clip on completion of a transfer. Use 'Server->Refresh Window' in the client to pick up any changes during / after transfer.

The plugin includes the following commands:

* Manage Amazon S3 Archive Service - View full status details of the archive service and enter / update the configuration values required to access the S3 archive. These include region / endpoint, access key, secret key, default bucket name and optional CMK ID. If a connection to Amazon S3 cannot be made an error is displayed and the values are not saved.

* View Amazon S3 Archive Service Job Queue - list of jobs that can be filtered by status (Pending, Complete, Failed, All). It provides the capability to view job contents, view job results and cancel waiting or queued jobs.

* Schedule copy to Amazon S3 - adds a copy job for each clip selected (if multiple clips reference the same source media only one job is created). Copy jobs can be scheduled when the archive service is not running or is offline but will only be run when the archive service is online (running and Amazon S3 is accessible). When the copy job is run, the source media associated with the clip is copied from the file system to Amazon S3 and the original file is preserved.

* Schedule move to Amazon S3 - as copy but on successful transfer of the file to storage the job attempts to delete the original file. If for some reason the deletion fails this does not affect the job status (or archive status of the clip) but "Purge failed" is recorded in the job data.

* Schedule restore from Amazon S3 - adds a restore job for each clip selected (if multiple clips reference the same source media only one job is created). Restores can be scheduled when the archive service is not running or is offline but will only be run when the archive service is online (running and Amazon S3 is accessible). When the restore job is run, the source media associated with the clip is copied from Amazon S3 to the file system.

* Purge files archived to Amazon S3 - deletes the source media associated with the selected clips if they have been successfully archived.

Archive details pane

The plugin automatically creates a panel entitled "Amazon S3 Archive" containing the clip metadata which describes its S3 archive state, including:

squarebox.catdv.archive.AmazonS3.serviceType - Type of service responsible for file transfer

squarebox.catdv.archive.AmazonS3.serviceName - Name of service responsible for file transfer

squarebox.catdv.archive.AmazonS3.status - Archive status

squarebox.catdv.archive.AmazonS3.location - Location of file in storage

squarebox.catdv.archive.AmazonS3.date - Date (timestamp) of latest change in archive status

squarebox.catdv.archive.AmazonS3.dateLastArchived - Date last archived

squarebox.catdv.archive.AmazonS3.numArchives - The number of times the clip has been successfully archived

squarebox.catdv.archive.AmazonS3.archiveKey - Identifier of file in storage

squarebox.catdv.archive.AmazonS3.batchID - Identifies the batch of files with which the clip was archived

squarebox.catdv.archive.AmazonS3.jobID - Identifier of current / latest archive job

squarebox.catdv.archive.AmazonS3.parentJobID - n/a for Amazon S3 (related to bulk archives)

squarebox.catdv.archive.AmazonS3.userId - ID of user initiating current / latest transfer

squarebox.catdv.archive.AmazonS3.history - Record of all archive activity

squarebox.catdv.archive.AmazonS3.regionName - Region of the Amazon S3 archive

squarebox.catdv.archive.AmazonS3.bucketName - Name of bucket to transfer file to / from

Known Issues

Restore can overwrite read only files.

License Code

IMPORTANT: You may install and use this software only in accordance with the terms of the CatDV Server 7.1.1 license agreement.

Square Box Systems Ltd.

October 2017

Release notes

1.5 (2019-03-28)

Upgrade

- run catdvarchiveplugins1.5.sql against the CatDV DB to fix the field type for archive date fields

- update worker plugin if applicable

Changes

- Changes to purge (these apply to move to archive as well as direct purge): clear date last restored, log 'File purged' or 'Purge failed' to clip archive params history, add purge error to clip archive params, add option of "Yes to all" confirmation for purging multiple altered files.

1.4.19 (2019-03-22)

- Fix blocking mechanism for ensuring that multiple processes cannot update a job's status simultaneously

- Terminate restore job with no retries if an S3 404 Not Found error occurs

1.4.18 (2019-03-06)

- prevent spurious 'Network outage'/NPE error when no restore directory can be extracted from the restore location

- update thread handling when processing jobs, to ensure a completed job will not be retried before its status has been fully updated

1.4.16 (2019-02-06)

- Display 'Retry' for status of unsuccessful job results if the job is being / will be retried

- Fix NPE listing jobs if user doesn't have permission to access selected clip

1.4.15 (2019-01-31)

- Improve performance of Job Queue command for large lists of jobs

1.4.14p4 (2019-01-28)

- Fix to cause job creation to fail if no source media are updated with the archive details for the job

1.4.14p3 (2019-01-25)

- Fix for issue of jobs stuck in running / stalled state

- Trim trailing path separators from archive / restore location overrides

1.4.14p2 (2019-01-20)

- Fix to ensure plugin calls server rest API as /catdv/api/... instead of /api/... to prevent failed calls due to redirection of POST requests in some environments

1.4.14p1 (2019-01-11)

- Update README with correct version of required worker plugin

1.4.14 (2018-11-13)

- Add plugin command that can be triggered via the rest API to generate clip data for a file archived outside CatDV

- Add config param 'catdv.s3archive_service.max_retry_delay' to limit the delay period between retries of waiting jobs.

1.4.12

- Add config param 'catdv.s3archive_service.purge_directories' to turn off purging of directories when moving / purging files.

- Update README with note on Glacier support

- Update README to cover installing and updating the archive worker plugin, now included as part of the plugin installation.

1.4.11

Upgrade

- If the 'catdv.s3archive_service.allow_override' server property is explicitly set to include

'location:Location:archive', modify it to 'archiveLocation:Location:archive'

Changes

- Add option to restore files to a specified location / directory (can be enabled as an override for restore)

1.4.9 (2018-09-14)

- Fix for error message on cancel job contains NullPointerException

- Fix for issue which causes a new FieldGroup for an archive plugin to be created each time the server is restarted

- Fixed bug causing purge commands to fail from the web UI

- Improve the error reporting when attempting to archive offline files

1.4.8betap3 (2018-07-11)

- Fix build / jar files

1.4.8betap2 (2018-07-03)

- Fix location override for archives triggered from worker

- Log start / end of transfer jobs outside debug mode

- Record archive failure in clip archive status if clip has never been archived

- Update standalone service instructions in README

(** above changes merged from version 1.4.3p1)

- Integrate with ServicePlugin framework

(** above change merged from version 1.4.3)

- Additional location mapping option to determine location on archive from Media Store paths, includes mediaStoreRelative and mediaStoreAbsolute.

- Fix to ensure that location on archive does not contain '\' separators, regardless of location mapping in use

(** above changes merged from patches 1.4.3betap3 and 1.4.3betap4)

- Manage service command: preserve new connection details even if they are not valid / updated, to ease entry

- Manage service command: obfuscate the secret key value

- Rename all packages written for plugins from squarebox.catdv to squarebox.plugins

- Fix for NPE when clip has media file but no metadata (introduced by fix in 1.4.5)

- Fix to eliminate spurious exception starting / stopping standalone service

1.4.5 (2018-05-03)

- Fix for rest.NotFoundException scheduling jobs: change method for checking job exists, to avoid server incompatibility

1.4.5beta

- Move packages in order to split out re-usable plugin SDK classes

- Fix to prevent NPE - on archiving / restoring, skip clips that have no source media (i.e. sequences)

- Fix for location override in WorkerBackupCommand

1.4.4beta (2018-03-29)

- Upgraded httpclient and httpcore libraries (required to support other archive plugin)

1.4.3p1 (TODO)

- Fix location override for archives triggered from worker

- Log start / end of transfer jobs outside debug mode

- Record archive failure in clip archive status if clip has never been archived

- Update standalone service instructions in README

1.4.3 (2018-06-08)

- Integrate with ServicePlugin framework

1.4.3betap4 (2018-05-02)

- Fix to ensure that location on archive does not contain '\' separators, regardless of location mapping in use

- Updated location mapping options to add mediaStoreRelative and mediaStoreAbsolute

1.4.3betap3 (2018-04-19)

- Additional location mapping option to determine location on archive from Media Store paths

1.4.3betap2 (2018-03)

- Fix to truncate job status before passing to Rest API

1.4.3betap1 (2018-03)

- Removed obsolete duplicate aws library

- Additional debug output for KMS encryption

1.4.3beta (2018-02)

- Add capability to use either client or server side KMS encryption. This is set via the config param 'catdv.s3archive_service.use_kms_encryption', which should now be set to 'client', 'server' or left unset for no encryption. For backwards compatibility, a value of 'true' sets server side encryption and a value of 'false' sets no encryption.

- Add config param 'catdv.s3archive_service.allow_override' to enable override of glacier tier and archive location from the UI.

- Add config param 'catdv.s3archive_service.location_mapping' to enable batching of file archive locations to be turned off - i.e. subsequent archives replace the existing file(s), rather than creating new 'date / time stamped' copies.

- Fix to ensure server config values that affect the UI are picked up on startup when archive service is standalone.

- Fix exponential back off timing for archive job retries

- Add date last restored as an archive parameter

1.4.2betap1 (2018-01-22)

- Fix for archive service start

1.4.2beta (2018-01-09)

Upgrade

- run catdvarchiveplugins1.4.2.sql against the CatDV DB to rename a job data field and to update the textual

job and clip archive status values for consistency

Changes

- Add capability to run plugin archive service standalone, via config param catdv.s3archive_service.service_mode

- Add capability to restore from Glacier deep storage. If the object being restored is in deep storage, a request is made to transfer the object from deep storage to S3, where it will be available for a specified number of days (2 days by default, can be configured via catdv.s3archive_service.glacier_restore_expiration). The archive service will attempt to periodically retry the restore job until the object is available on S3 and can be fully restored or the maximum number of retries has been attempted.

- Add capability to use AWS KMS to encrypt objects in transit to / from and at rest on Amazon S3. This requires AWS KMS to be set up so that the SDK can get the Customer Master Key (CMK) for encryption. Set the new config param 'catdv.s3archive_service.use_kms_encryption' to 'true' to turn KMS encryption on and ensure that a CMK ID is provided via the Manage service command.

- Provision for providing more detailed status information for jobs and (archiving) files

- Add detailed job status and job data to Job Details pane in the Service Job Queue UI

- Fix for archive failure due to special characters in the file path / name (such as ™), with an error message of "The request signature we calculated does not match the signature you provided". The original file path in the object metadata is now url encoded on archive and decoded on metadata retrieval.

- New config param 'catdv.s3archive_service.max_jobs_to_display' for configuring the maximum number of jobs which will be listed in the job queue. This overrides days_to_display and defaults to 1000

- Increased default value for 'catdv.s3archive_service.loop_delay' to 30 seconds

- Increased default value for 'catdv.s3archive_service.concurrent_transfer_limit' to 4

- Increased default value for 'catdv.s3archive_service.days_to_display' to 10
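Taken together, the job-display and tuning params above could be set explicitly like this (the values shown are the defaults stated in this changelog):

```properties
# Maximum jobs listed in the job queue; overrides days_to_display (default 1000)
catdv.s3archive_service.max_jobs_to_display=1000
# Delay between job-processing loop iterations, in seconds (default now 30)
catdv.s3archive_service.loop_delay=30
# Maximum simultaneous S3 transfers (default now 4)
catdv.s3archive_service.concurrent_transfer_limit=4
# Days of job history displayed (default now 10)
catdv.s3archive_service.days_to_display=10
```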

- Updated Amazon AWS for Java library to version 1.11.221

1.4.1betap6 (2017-12-22)

- Fixes for worker plugin commands

1.4.1betap5 (2017-12-20)

- Return json response message from WorkerPurgeCommand

- Include metaclipID where applicable in error details for worker backup / restore / purge commands

1.4.1betap4 (2017-12-12)

- Fix bulk backup and restore commands to capture and report unexpected errors queuing child jobs

- Update worker backup and restore commands to return a JSON representation of the archive result in the command response.

1.4.1betap3 (2017-12-01)

- Fix to ignore clips with duplicate mapped file paths, before processing to queue jobs

1.4.1betap2

- New hidden versions of backup (copy/move) and purge commands for use by worker

1.4.1betap1 (2017-11-16)

- Fix to ensure archive metadata for source media with matching media paths are updated simultaneously

- Additional service logging, new config param 'catdv.s3archive_service.debug' now turns on/off most logging after initial plugin startup

1.4.1beta (2017-06-09)

Upgrade

- remove 'catdv.rest_api.*' config properties in Server Config on CatDV control panel

- run catdvarchiveplugins1.3.5.sql against the CatDV DB to fix the field type for archive date fields

Changes

- major update to job queue command UI, utilising new features provided by version 3 of plugin framework

- make API calls in process (inside server) from plugin commands and from archive service when running inside server (depends on version 3 of plugin framework)

1.3.5beta (2017-06-09)

Upgrade

- change config property 'catdv.rest_api.client_key' to 'catdv.s3archive_service.licence_code' in Server Config on CatDV control panel

- run catdvarchiveplugins1.3.5.sql against the CatDV DB to fix the field type for archive date fields

Changes

- improve the job processing loop so that job priorities are always respected: only take one job from the queue at a time; when waiting-job delays have elapsed, re-queue the jobs rather than processing them directly (note: re-queued jobs of equal priority are processed before newer jobs, as the lowest job ID is processed first)

- ensure that transfers to S3 never block the job processing loop, keeping the connection status up to date

- improve processing of multiple concurrent transfers plus config param max_jobs_running_delay for optimisation

- improve handling when S3 is unavailable: after about 30s the service status updates to "Running (offline)"

- restart orphaned "in progress" jobs (e.g. from a server restart) more quickly

- switch to using statusCode as the primary status value (except for job queries, which require the latest server)

- add support for pausing a service (processing transfers)

- improve plugin licence handling

- update to version 1.11.127 of Amazon AWS Java SDK

- script to fix archive date fields in desktop client for existing installations

- fix NPE when clip has no metadata

1.3.4beta3 (2017-03-02)

- Option to specify destination location 'path' when copying / moving files to Amazon S3 from the worker

- Add config param "catdv.s3archive_service.restrict_command_access" which can be used to hide 'all' plugin commands or 'config' commands (currently Manage Service). See README for details.
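Per this entry, the property accepts 'all' or 'config'; a sketch (see the README for the authoritative details):

```properties
# Hide plugin commands from clients: 'all' hides every plugin command,
# 'config' hides only configuration commands (currently Manage Service)
catdv.s3archive_service.restrict_command_access=config
```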

- fix archive date fields in desktop client for new installations

1.3.4beta2 (2017-03-02)

- Option to enter a bucket name when copying / moving files to Amazon S3. The bucket name on the Manage Service screen is now only a default, filled in the first time a user attempts to copy or move files; once the user has entered a bucket, subsequent moves / copies default to the last value entered. The bucket warning and "Apply bucket to queued jobs" checkbox have been removed from the Manage Service screen, as changes there now only affect the default bucket for a new user of the plugin.

- to support HGST/StrongBox archives, add new config param catdv.s3archive_service.signer_override and upgrade to aws-java-sdk-1.11.52 (along with dependencies: joda-time-2.9.5.jar, httpcore-4.4.5.jar, httpclient-4.5.2.jar)
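For S3-compatible targets such as HGST/StrongBox, the signer override might be set as follows (the value shown is an assumption for illustration; the signer type your target requires is not documented in this entry):

```properties
# Override the AWS SDK request signer for S3-compatible endpoints
# (example value only; consult your storage vendor for the correct signer)
catdv.s3archive_service.signer_override=S3SignerType
```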

1.3.3 (2016-12-07)

Upgrade

- generate a rest api licence code registered to the customer for client 'ARCS3' and add to the configured properties:

catdv.rest_api.client_key=<generated_licence_code>

- run catdvs3plugin1.3.3.sql against the CatDV DB to set status codes for existing Jobs

- check in web admin that there are not two Amazon S3 archive panels

- if there are, delete the one that is not referenced by the data field of the service table entry for 'AmazonS3'

Changes

- use a separate license pool for each archive plugin's access to the Rest API

- fixes for archiving of complex media

- complex media archive status reflects the archive status of its contents (the clip media 'Archive Details' field is now updated along with the archive values stored in clip media metadata)

- rename package squarebox.util to squarebox.archiveplugin.util to avoid name clashes in the server

- add more detail to successful output for job results

- change location of files on S3 archive: <batch id>/<actual file path>

- the batch id is the date/time that a selection of files was scheduled for archive to S3

- ** this means files on the archive will no longer be overwritten, but may have multiple copies

- propagate changes to archive metadata to all source media with the same value for mediaPath

- set status code on archive Jobs along with text status

- purge empty directories along with files when using the 'Schedule move...' or 'Purge files...' commands

- the top two directory levels are always preserved even if empty, e.g. '/volumes/media/'

- automatically create field definition(s) for any new archive metadata and add them to the plugin's archive panel

- throw a meaningful error if any of the properties required by the RestAPI are not set

1.3.2p1 (2016-09-21)

- patch for NullPointerException when scheduling transfers for clips with no existing metadata

1.3.2 (2016-09-08)

Upgrade

- Run catdvs3plugin1.3.2.sql against the CatDV DB to update the Job types

Changes

- fix for archiving mapped paths

- changed plugin command names and job types for clarity

1.3.1 (2016-08-10)

Initial production version

** If you previously had a beta version installed:

- any existing archive jobs should be removed from the database

- any manually created archive panel should be removed

- any existing archive data will be ignored (field names now prefixed with service type)

Copyright © Square Box Systems Ltd. 2002-2019. All rights reserved.