BackupTask

extends Task
uses LocatorAwareTrait

Scheduled backup queue task.

Triggered by cron to create encrypted backups on a schedule.
Table of Contents
Properties
- $costs : int
- Activate this if you want cost management per server to avoid server overloading.
- $QueuedJobs : QueuedJobsTable
- $queueModelClass : string
- $rate : int
- Rate limiting per worker in seconds.
- $retries : int|null
- Number of times a failed instance of this task should be restarted before giving up.
- $timeout : int|null
- Timeout in seconds, after which the Task is reassigned to a new worker if not finished successfully.
- $unique : bool
- Set to true to make sure this specific task is never run in parallel, neither on the same server nor on any other. No running worker will fetch this task if any job of this type is already in progress.
- $io : Io
- $logger : LoggerInterface|null
Methods
- __construct() : mixed
- run() : void
- Main execution of the task.
- taskName() : string
- cleanOldBackups() : void
- Delete backups older than the configured retention period.
Properties
$costs
Activate this if you want cost management per server to avoid server overloading.
public
int
$costs
= 0
Expensive tasks (CPU, memory, ...) can be assigned 1...100 points here; higher values prevent a similarly cost-intensive task from being fetched on the same server in parallel. Cheaper tasks can still be processed on the same server while an expensive one is running.
$QueuedJobs
public
QueuedJobsTable
$QueuedJobs
$queueModelClass
public
string
$queueModelClass
= 'Queue.QueuedJobs'
$rate
Rate limiting per worker in seconds.
public
int
$rate
= 0
Activate this if you want to stretch the processing of a specific task per worker.
$retries
Number of times a failed instance of this task should be restarted before giving up.
public
int|null
$retries
= 1
Defaults to Config::defaultworkerretries().
$timeout
Timeout in seconds, after which the Task is reassigned to a new worker if not finished successfully.
public
int|null
$timeout
= 600
This should be high enough that it cannot still be running on a zombie worker (>> 2x). Defaults to Config::defaultworkertimeout().
$unique
Set to true to make sure this specific task is never run in parallel, neither on the same server nor on any other. No running worker will fetch this task if any job of this type is already in progress.
public
bool
$unique
= false
$io
protected
Io
$io
$logger
protected
LoggerInterface|null
$logger
= null
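Taken together, the scheduling properties above control how workers pick up this task. The following is a minimal sketch of a task subclass tuning them; the property values are illustrative choices, not the plugin's defaults:

```php
<?php
declare(strict_types=1);

namespace App\Queue\Task;

use Queue\Queue\Task;

/**
 * Illustrative example: a cost-intensive, serialized backup task.
 */
class BackupTask extends Task
{
    // Heavy on CPU/disk: keep other expensive tasks off this server while running.
    public int $costs = 80;

    // Never run two backups at the same time, on any server.
    public bool $unique = true;

    // At most one execution per worker every 300 seconds.
    public int $rate = 300;

    // Retry once before the job is marked as failed.
    public ?int $retries = 1;

    // Reassign to another worker if not finished within 10 minutes.
    public ?int $timeout = 600;

    /**
     * @param array<string, mixed> $data Not used for scheduled backups
     * @param int $jobId Queue job ID
     */
    public function run(array $data, int $jobId): void
    {
        // ... create the encrypted backup here ...
    }
}
```

Because $unique is set, a second scheduled backup job will simply wait in the queue until the running one finishes.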
Methods
__construct()
public
__construct([Io|null $io = null ][, LoggerInterface|null $logger = null ]) : mixed
Parameters
- $io : Io|null = null
  IO
- $logger : LoggerInterface|null = null
run()
Main execution of the task.
public
run(array<string, mixed> $data, int $jobId) : void
Parameters
- $data : array<string, mixed>
  Not used for scheduled backups
- $jobId : int
  Queue job ID
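run() is invoked by a queue worker rather than called directly; the job is typically enqueued by a cron-triggered command. A hedged sketch using the Queue plugin's QueuedJobs table (the 'Backup' job type name follows from taskName(), and the priority value is an arbitrary example):

```php
<?php
// Inside a cron-triggered command; fetchTable() is available
// via LocatorAwareTrait.
$queuedJobs = $this->fetchTable('Queue.QueuedJobs');

// Enqueue a backup job; $data is unused for scheduled backups.
$queuedJobs->createJob('Backup', [], ['priority' => 5]);
```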
taskName()
public
static taskName() : string
Return values
string

cleanOldBackups()
Delete backups older than the configured retention period.
private
cleanOldBackups(mixed $backupsTable, BackupStorageService $storage, mixed $appSettings) : void
Parameters
- $backupsTable : mixed
- $storage : BackupStorageService
- $appSettings : mixed
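The retention logic itself is not documented here. The following is a hypothetical sketch of what such a method might do; the retention_days setting name, the Backups table fields, and BackupStorageService::delete() are all assumptions, not the actual implementation:

```php
<?php
use Cake\I18n\DateTime;

/**
 * Hypothetical sketch: delete backups older than the retention period.
 */
private function cleanOldBackups(mixed $backupsTable, BackupStorageService $storage, mixed $appSettings): void
{
    // Assumed setting name; fall back to 30 days if unset.
    $retentionDays = (int)($appSettings->get('retention_days') ?? 30);
    $cutoff = DateTime::now()->subDays($retentionDays);

    $oldBackups = $backupsTable->find()
        ->where(['created <' => $cutoff])
        ->all();

    foreach ($oldBackups as $backup) {
        // Remove the stored (encrypted) file first, then the database record.
        $storage->delete($backup->path);
        $backupsTable->delete($backup);
    }
}
```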