About the integration
🕹 Features
📍 Requirements
Tempo worklogs migration is a pre-built template from ScriptRunner Connect. To run this migration in a production instance, a paid subscription is required; however, you can try it for free.
🐙 What is ScriptRunner Connect?
A code-first integration platform that enables you to script in JavaScript/TypeScript, collaborate, and connect apps within an AI-assisted coding environment to solve complex integration challenges.
🧩 Pre-built integration templates package pre-written code and specific features to help you set up integrations quickly. See full list.
Template Content
This template facilitates the migration of Tempo Timesheets worklogs between cloud instances. It supports resumable migrations, allowing you to abort/pause and resume the process as needed. Failed worklog migrations are logged, and you can re-run the migration script to re-try migrating only the failed worklogs.
Included scripts:
- MigrateWorklogs - Initiates the migration process.
- ResetMigration - Resets the migration process.
- GenerateDummyIssues - Creates dummy issues and worklogs for testing.
- OnGenerateReport - Generates a report on the migration status.
- Sync HTTP event listener - Serves the generated report over HTTP.

If a worklog author is not allowed to log work in the target instance, the worklog can be logged with a default account instead: set the DEFAULT_AUTHOR_ACCOUNT_ID parameter in the Parameters.

To start the migration, run the MigrateWorklogs script manually. You can abort the process anytime by clicking Abort Invocation in the console.

💡 Tip: Clearing the console logs removes the abort option. If the console is cleared, wait 15 minutes for the script to restart (and you'll receive a new kickoff message and abort option).
There are two primary ways to review your migration details:
- Console logs: the migration script logs its progress to the console; enable the VERBOSE parameter for even more information.
- Report page: paste the URL of the Sync HTTP event listener into your browser. The summary includes:
  - Recent batches (adjust MAX_DISPLAYED_BATCHES in the parameters Report folder to change the number of batches displayed)
  - Recent failures (adjust MAX_DISPLAYED_FAILURES in the parameters Report folder to change the number of failures displayed)

ℹ️ Note: Worklogs per Second measures overall speed, while Worklogs per Second per Batch reflects the average speed within each batch.
ℹ️ Note: The migration script can restart itself up to 200 times, allowing for a maximum runtime of about 40 hours. If you need to run longer than that, you will need to resume the migration manually after 200 restarts have been exhausted.
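Under the hood, each restart is simply the script re-triggering itself before the current invocation times out. Below is a condensed sketch of the loop in the MigrateWorklogs script further down (batch processing elided; enoughTimeForNextBatch is a simplified stand-in for the real isTimeLeftForNextBatch check):

import { triggerScript } from '@sr-connect/trigger';

export default async function (event: any, context: Context): Promise<void> {
    while (true) {
        // ... process one batch of issues; break out when no issues remain ...
        if (!enoughTimeForNextBatch(context)) {
            // Hand the remaining work over to a fresh invocation and exit this one.
            await triggerScript('MigrateWorklogs');
            break;
        }
    }
}

// Simplified stand-in: the real script estimates the next batch's duration from the
// average batch time; here we just keep one minute of headroom before the timeout.
function enoughTimeForNextBatch(context: Context): boolean {
    return context.startTime + context.timeout - 60_000 > Date.now();
}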
Concurrency: Adjust ATLASSIAN_API_CONCURRENCY and TEMPO_API_CONCURRENCY in parameters for optimal performance. Monitor the reporting page for throttling issues and adjust accordingly; a short sketch of the throttling pattern follows the tips below.
💡 Tip: Run only one migration at a time. For parallel migrations, create a separate workspace and adjust API concurrency limits.
💡 Tip: Reduce TEMPO_API_CONCURRENCY if experiencing 500 errors during retries.
💡 Tip: Upgrade your ScriptRunner Connect plan to take advantage of higher internal rate limits and speed up the migration.
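Both concurrency parameters feed directly into throttleAll from promise-throttle-all, which caps how many API calls are in flight at once. A minimal sketch of the pattern, where callTempoApi and migrateBatch are hypothetical stand-ins for the script's Managed API calls:

import { throttleAll } from 'promise-throttle-all';

const TEMPO_API_CONCURRENCY = 10;

// Hypothetical stand-in for a Tempo Managed API call.
async function callTempoApi(worklogId: number): Promise<string> {
    return `migrated ${worklogId}`;
}

async function migrateBatch(worklogIds: number[]): Promise<string[]> {
    // Each task is a function that returns a promise; throttleAll starts at most
    // TEMPO_API_CONCURRENCY of them at any one time.
    const tasks = worklogIds.map(id => () => callTempoApi(id));
    return throttleAll(TEMPO_API_CONCURRENCY, tasks);
}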
Batch Size: The JIRA_JQL_SEARCH_API_PAGE_SIZE parameter controls batch size. Larger values may be beneficial if worklogs are sparsely populated amongst issues.
💡 Tip: Use the ResetMigration script if restarting the migration process. Ensure worklog deletion is enabled if needed.
To retry failed worklogs:
- Set BATCH_TYPE to FAILED_WORKLOGS in parameters.
- Run the MigrateWorklogs script.

If retries are successful, you will see a reduction of worklogs in the Failed Worklogs section of the report page.
ℹ️ Note: When running in FAILED_WORKLOGS mode or resuming a paused migration, some metrics (e.g., Issues Processed, Worklogs per Second, etc.) may not accurately reflect the state of the source instance.
When you run the migration script, here's what will happen:
- Migration state is loaded from RecordStorage. If no existing state is found, a new state is created.
- The user mapping cache is loaded or populated. By default, source users are mapped to target users by account ID; adjust the getTargetUserAccountId function if this method doesn't work for your needs.
- Batch counters are reset.
- The next batch of issues is processed:
  - An issue search is conducted using a JQL query.
  ℹ️ Note: In FAILED_WORKLOGS mode, a list of failed worklog parent issue IDs is generated instead, limited to the page size set by the JIRA_JQL_SEARCH_API_PAGE_SIZE parameter.
  - Worklogs are retrieved from the source instance.
  - The issue mapping cache is populated. The getTargetIssueId function finds corresponding issues in the target instance.
  ℹ️ Note: By default, target issues are located by matching issue key values. Adjust the getTargetIssueId function if necessary.
  - If enabled, existing worklogs from the target instance in this batch are deleted.
  ℹ️ Note: If you don't have any prior worklogs in the target instance that need to be deleted, you can disable this step to save time. However, when you're resuming an aborted migration, even when retrying failed worklogs, you must enable this option to avoid duplicated worklogs.
  - Worklogs in the batch are migrated.
- Migration state is saved back to RecordStorage.

ℹ️ Note: Use KEEP_REMAINING_ESTIMATE (default true) to prevent Tempo from re-calculating the remaining estimate while migrating worklogs.

GenerateDummyIssues

import dayjs from 'dayjs';
import TempoCloudSource from './api/tempo/cloud/source';
import JiraCloudTarget from './api/jira/cloud/target';
import JiraCloudSource from './api/jira/cloud/source';
import { JiraCloudApi } from '@managed-api/jira-cloud-v3-sr-connect';
import { retry } from '@managed-api/commons-core';
import { getEnvVars, ThrottleAwareResult } from './Utils';
/**
* This script generates dummy issues on the source and target instances and creates a random number (1-10) of worklogs for each source issue. The function runs for up to 15 minutes.
**/
export default async function (event: any, context: Context): Promise<void> {
const user = await JiraCloudSource.Myself.getCurrentUser();
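// Keep creating batches of dummy issue pairs until about a minute before this invocation times out (see the check at the bottom of the loop).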
while (true) {
const time = Date.now();
const issueTasks: Promise<CreateDummyIssuePairResult>[] = [];
for (let i = 0; i < getEnvVars(context).GenerateTestData.CONCURRENCY; i++) {
issueTasks.push(createDummyIssuePair(context, user.accountId ?? ''));
}
const results = await Promise.all(issueTasks);
console.log(`Batch done`, {
time: `${(Date.now() - time) / 1000} seconds`,
worklogsCreated: results.reduce((prev, current) => prev + current.worklogsCreated, 0),
atlassianThrottleCount: results.reduce((prev, current) => prev + current.atlassianThrottleCount, 0),
tempoThrottleCount: results.reduce((prev, current) => prev + current.tempoThrottleCount, 0),
});
if (Date.now() > context.startTime + context.timeout - 60 * 1000) {
break;
}
}
}
async function createDummyIssuePair(context: Context, accountId: string): Promise<CreateDummyIssuePairResult> {
try {
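// Pick a random worklog count between 1 and 10.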
const worklogsCount = Math.ceil(Math.random() * 100 / 10);
const summary = `Test issue ${worklogsCount}`;
const [sourceIssue, targetIssue] = await Promise.all([createIssue(context, JiraCloudSource, summary), createIssue(context, JiraCloudTarget, summary)]);
const worklogTasks: Promise<CreateWorklogResult>[] = [];
for (let i = 0; i < worklogsCount; i++) {
worklogTasks.push(createWorklog(sourceIssue.issueId, accountId));
}
const worklogs = await Promise.all(worklogTasks);
return {
worklogsCreated: worklogs.filter(w => !!w.tempoWorklogId).length,
atlassianThrottleCount: sourceIssue.throttleCount + targetIssue.throttleCount,
tempoThrottleCount: worklogs.reduce((prev, current) => prev + current.throttleCount, 0)
}
} catch (e) {
console.error('Failed to create dummy issue', e);
}
return {
worklogsCreated: 0,
atlassianThrottleCount: 0,
tempoThrottleCount: 0
}
}
async function createIssue(context: Context, instance: JiraCloudApi, summary: string): Promise<CreateIssueResult> {
let throttleCount = 0;
const project = await instance.Project.getProject({
projectIdOrKey: getEnvVars(context).GenerateTestData.PROJECT_KEY,
});
const issueTypes = await instance.Issue.Type.getTypesForProject({
projectId: +(project.id ?? 0)
});
const issueType = issueTypes.find(it => it.name === getEnvVars(context).GenerateTestData.ISSUE_TYPE);
if (!issueType) {
throw Error('Issue Type not found');
}
const issue = await instance.Issue.createIssue({
body: {
fields: {
project: {
id: project.id ?? ''
},
issuetype: {
id: issueType.id ?? ''
},
summary
}
},
errorStrategy: {
handleHttp429Error: (response) => {
console.log('JIRA THROTTLED', response);
throttleCount++;
return retry(1000);
}
}
});
return {
issueId: +(issue.id ?? '0'),
issueKey: issue.key,
throttleCount
}
}
async function createWorklog(issueId: number, authorAccountId: string): Promise<CreateWorklogResult> {
let throttleCount = 0;
try {
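// Random durations of 1-10 whole hours, expressed in seconds.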
const timeSpentSeconds = Math.ceil(Math.random() * 100 / 10) * 60 * 60;
const remainingEstimateSeconds = Math.ceil(Math.random() * 100 / 10) * 60 * 60;
const billableSeconds = Math.ceil(Math.random() * 100 / 10) * 60 * 60;
const now = dayjs(new Date());
const worklog = await TempoCloudSource.Worklog.createWorklog({
body: {
issueId,
authorAccountId,
timeSpentSeconds,
remainingEstimateSeconds,
billableSeconds,
startDate: now.format('YYYY-MM-DD'),
startTime: now.format('HH:mm:ss'),
description: `Work logged for ${timeSpentSeconds / 60 / 60}h`,
},
errorStrategy: {
handleHttp429Error: () => {
throttleCount++;
return retry(1000);
}
}
});
return {
tempoWorklogId: worklog.tempoWorklogId,
throttleCount
}
} catch (e) {
console.error('Failed to create worklog', e, {
issueId,
authorAccountId
});
}
return {
throttleCount
};
}
interface CreateDummyIssuePairResult {
worklogsCreated: number;
atlassianThrottleCount: number;
tempoThrottleCount: number;
}
interface CreateIssueResult extends ThrottleAwareResult {
issueId: number;
issueKey?: string;
}
interface CreateWorklogResult extends ThrottleAwareResult {
tempoWorklogId?: number;
}
MigrateWorklogs

import JiraCloudTarget from './api/jira/cloud/target';
import TempoCloudTarget from './api/tempo/cloud/target';
import TempoCloudSource from './api/tempo/cloud/source';
import JiraCloudSource from './api/jira/cloud/source';
import { RecordStorage } from '@sr-connect/record-storage';
import { ArrayElement, FailedWorklog, MigrationState, BatchType, RecordStorageKeys, IssuePair, getEnvVars } from './Utils';
import { IssueBeanAsResponse } from '@managed-api/jira-cloud-v3-core/definitions/IssueBeanAsResponse';
import { WorklogAsResponse } from '@managed-api/tempo-cloud-v4-core/definitions/WorklogAsResponse';
import { JiraCloudApi } from '@managed-api/jira-cloud-v3-sr-connect';
import { GetUsersResponseOK } from '@managed-api/jira-cloud-v3-core/types/user';
import { TempoCloudApi } from '@managed-api/tempo-cloud-v4-sr-connect';
import { ForbiddenError, retry, TooManyRequestsError } from '@managed-api/commons-core';
import { throttleAll } from 'promise-throttle-all';
import { triggerScript } from '@sr-connect/trigger';
import { CommonError as AtlassianCommonError } from '@managed-api/jira-cloud-v3-core/errorStrategy';
import { CommonError as TempoCommonError } from '@managed-api/tempo-cloud-v4-core/errorStrategy';
import dayjs from 'dayjs';
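// Module-level caches and per-batch counters; the counters are reset at the start of every batch by resetBatchCounters().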
const issueMappingCache: Record<number, number> = {};
let userMappingCache: Record<string, string> = {};
let srcThrottleCount = 0;
let atlassianThrottleCount = 0;
let tempoThrottleCount = 0;
let migratedWorklogsCount = 0;
let failedWorklogsCount = 0;
let deletionFailures = 0;
let failedWorklogs: FailedWorklog[] = [];
let isTimeLeftForRetry: () => boolean;
/**
* This function migrates worklogs either by finding source issues via a JQL expression or by retrying failed worklogs.
*/
export default async function (event: any, context: Context): Promise<void> {
const startTime = Date.now();
init(context);
const storage = new RecordStorage();
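// Load the persisted migration state and the user mapping cache in parallel; this persisted state is what makes aborted migrations resumable.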
const [migrationState] = await Promise.all([loadMigrationState(storage, startTime), loadUserMapping(context, storage)]);
while (true) {
resetBatchCounters(migrationState);
const batchTime = Date.now();
const issuesProcessed = await processBatchOfIssues(context, migrationState);
const timeSpent = Date.now() - batchTime;
console.info(`Batch of ${issuesProcessed} issue worklogs migrated. Successfully migrated worklogs: ${migratedWorklogsCount}. Failed worklogs: ${failedWorklogsCount}. SR Connect throttle count: ${srcThrottleCount}. Atlassian throttle count: ${atlassianThrottleCount}. Tempo throttle count: ${tempoThrottleCount}. Time spent: ${getTimeSpent(batchTime)}.`)
updateMigrationState(context, migrationState, issuesProcessed, timeSpent);
await storage.setValue(RecordStorageKeys.MIGRATION_STATE, migrationState);
if (issuesProcessed === 0) {
if (getEnvVars(context).BATCH_TYPE === 'JQL') {
console.log('No more issues found to migrate, exiting job.');
}
if (getEnvVars(context).BATCH_TYPE === 'FAILED_WORKLOGS') {
console.log('No more worklogs found to retry, exiting job.');
}
break;
}
if (!isTimeLeftForNextBatch(context, migrationState)) {
console.log('No time left to run a next batch of issues in this invocation, starting a new one.');
await triggerScript('MigrateWorklogs');
break;
}
}
}
/**
* This function finds the target issue ID based on source issue data; modify this function to suit your target issue lookup strategy.
*/
async function getTargetIssueId(sourceIssue: IssueBeanAsResponse): Promise<number | undefined> {
const issue = await JiraCloudTarget.Issue.getIssue({
issueIdOrKey: sourceIssue.key,
errorStrategy: {
handleHttp429Error: handleAtlassian429Error
}
});
return +(issue.id ?? '');
}
/**
* This function finds a user from the target users list based on source user data; modify this function to suit your target user lookup strategy.
*/
async function getTargetUserAccountId(targetUsers: GetUsersResponseOK, sourceUser: ArrayElement<GetUsersResponseOK>): Promise<string | undefined> {
return sourceUser.accountId;
// Uncomment this part if you wish to match users by email, but then all users should have visible email addresses
// if (!sourceUser.emailAddress) {
// throw Error('Source user\'s email address is not visible');
// }
// return targetUsers.find(u => u.emailAddress?.toLowerCase() === sourceUser.emailAddress?.toLowerCase())?.accountId;
}
function init(context: Context) {
if (getEnvVars(context).BATCH_TYPE === 'FAILED_WORKLOGS' && !getEnvVars(context).DELETE_EXISTING_WORKLOGS) {
throw Error('DELETE_EXISTING_WORKLOGS is set to false when re-trying failed worklogs, this option needs to be enabled')
}
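// Stop retrying throttled API calls once fewer than RETRY_CUTOFF_TIME seconds remain in this invocation.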
isTimeLeftForRetry = () => context.startTime + context.timeout - getEnvVars(context).Advanced.RETRY_CUTOFF_TIME * 1000 > Date.now();
resetErrorStrategies();
}
function resetErrorStrategies() {
// Reset global error strategies (remove default re-try logic), since we'll be defining our own re-try behaviour when getting throttled.
JiraCloudSource.setGlobalErrorStrategy(null);
JiraCloudTarget.setGlobalErrorStrategy(null);
TempoCloudSource.setGlobalErrorStrategy(null);
TempoCloudTarget.setGlobalErrorStrategy(null);
}
function resetBatchCounters(migrationState: MigrationState) {
srcThrottleCount = 0;
atlassianThrottleCount = 0;
tempoThrottleCount = 0;
migratedWorklogsCount = 0;
failedWorklogsCount = 0;
deletionFailures = 0;
failedWorklogs = migrationState.failedWorklogs;
}
function updateMigrationState(context: Context, migrationState: MigrationState, issuesCount: number, timeSpent: number) {
migrationState.migratedWorklogsCount += migratedWorklogsCount;
migrationState.failedWorklogsCount = failedWorklogs.length;
migrationState.deletionFailures += deletionFailures;
migrationState.srcThrottleCount += srcThrottleCount;
migrationState.atlassianThrottleCount += atlassianThrottleCount;
migrationState.tempoThrottleCount += tempoThrottleCount;
migrationState.batches.push({
batchType: getEnvVars(context).BATCH_TYPE,
migratedWorklogsCount,
srcThrottleCount,
atlassianThrottleCount,
tempoThrottleCount,
timeSpent,
deletionFailures,
failedWorklogsCount,
completionDate: new Date().toUTCString()
});
if (getEnvVars(context).BATCH_TYPE === 'JQL') {
migrationState.issuesProcessed += issuesCount;
}
if (issuesCount === 0) {
migrationState.endTime = Date.now();
} else {
migrationState.endTime = undefined;
}
}
async function loadUserMapping(context: Context, storage: RecordStorage) {
const userMapping = await storage.getValue<typeof userMappingCache>(RecordStorageKeys.USER_MAPPING);
if (userMapping) {
userMappingCache = userMapping;
} else {
console.log('Getting users from source and target instances...');
const usersTime = Date.now();
const [sourceUsers, targetUsers] = await Promise.all([getUsers(JiraCloudSource), getUsers(JiraCloudTarget)]);
console.log(`Found ${sourceUsers.length} users from source and ${targetUsers.length} users from target instance. Time spent: ${getTimeSpent(usersTime)}`);
const userCacheTime = Date.now();
console.log(`Populating users mapping cache for ${sourceUsers.length} source users...`);
await throttleAll(getEnvVars(context).ATLASSIAN_API_CONCURRENCY, sourceUsers.filter(user => user.accountType !== 'app').map(user => () => populateUserMappingCache(context, targetUsers, user)));
console.log(`User mapping cache populated. Time spent: ${getTimeSpent(userCacheTime)}`);
await storage.setValue(RecordStorageKeys.USER_MAPPING, userMappingCache);
}
}
async function loadMigrationState(storage: RecordStorage, startTime: number) {
let migrationState = await storage.getValue<MigrationState>(RecordStorageKeys.MIGRATION_STATE);
if (!migrationState) {
migrationState = {
startTime,
issuesProcessed: 0,
migratedWorklogsCount: 0,
failedWorklogsCount: 0,
srcThrottleCount: 0,
atlassianThrottleCount: 0,
tempoThrottleCount: 0,
deletionFailures: 0,
batches: [],
failedWorklogs: [],
}
}
return migrationState;
}
function isTimeLeftForNextBatch(context: Context, migrationState: MigrationState) {
const batches = migrationState?.batches ?? [];
const averageBatchTime = batches.reduce((prev, current) => prev + current.timeSpent, 0) / batches.length;
let timeRequiredForNextBatch = averageBatchTime * getEnvVars(context).Advanced.BATCH_CYCLE_CUTOFF_TIME_MULTIPLIER;
if (timeRequiredForNextBatch < getEnvVars(context).Advanced.BATCH_CYCLE_MIN_TIME * 1000) {
timeRequiredForNextBatch = getEnvVars(context).Advanced.BATCH_CYCLE_MIN_TIME * 1000;
};
return context.startTime + context.timeout - timeRequiredForNextBatch > Date.now();
}
async function processBatchOfIssues(context: Context, migrationState: MigrationState) {
if (getEnvVars(context).BATCH_TYPE === 'JQL') {
logIfVerbose(context, `Getting issues from source instance, skipping ${migrationState.issuesProcessed} issues...`);
}
if (getEnvVars(context).BATCH_TYPE === 'FAILED_WORKLOGS') {
logIfVerbose(context, `Getting issues of failed worklogs from source instance...`);
}
const sourceIssuesTime = Date.now();
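// In FAILED_WORKLOGS mode the batch is built from parent issue IDs of previously failed worklogs, capped at one search page.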
const failedIssueIds = Array.from(new Set(migrationState.failedWorklogs.filter(w => !!w.issueId).map(w => w.issueId!))).slice(0, getEnvVars(context).JIRA_JQL_SEARCH_API_PAGE_SIZE);
if (getEnvVars(context).BATCH_TYPE === 'FAILED_WORKLOGS' && failedIssueIds.length === 0) {
return 0;
}
const issues = await getIssuesFromJQL(context, migrationState, failedIssueIds);
console.log(`Found ${issues.length} issues from source instance. Time spent ${getTimeSpent(sourceIssuesTime)}. Issues: ${(issues).map(i => i.key).join(', ')}.`);
if (issues.length === 0) {
return 0;
}
const sourceWorklogs = await getSourceInstanceWorklogs(context, issues);
const uniqueSourceIssueIds = Array.from(new Set(sourceWorklogs.filter(w => !!w.issue.id).map(w => w.issue.id!)));
await populateIssueMappingCache(context, issues, uniqueSourceIssueIds);
if (getEnvVars(context).DELETE_EXISTING_WORKLOGS && uniqueSourceIssueIds.length > 0) {
try {
await deleteExistingWorklogs(context, uniqueSourceIssueIds, sourceWorklogs);
} catch (e) {
console.error(`Failed to delete existing worklogs in target instance: ${sourceWorklogs.map(w => w.tempoWorklogId).join(', ')}`, e);
if (getEnvVars(context).HALT_WHEN_WORKLOG_DELETION_FAILS) {
throw e;
}
}
}
await migrateWorklogs(context, sourceWorklogs);
return issues.length;
}
async function getIssuesFromJQL(context: Context, migrationState: MigrationState, failedIssueIds: number[]) {
const searchResult = await JiraCloudSource.Issue.Search.searchByJql({
body: {
jql: getEnvVars(context).BATCH_TYPE !== 'FAILED_WORKLOGS' ? getEnvVars(context).JQL : `issue in (${failedIssueIds.join(', ')})`,
startAt: getEnvVars(context).BATCH_TYPE === 'JQL' ? migrationState.issuesProcessed : undefined,
maxResults: getEnvVars(context).JIRA_JQL_SEARCH_API_PAGE_SIZE
},
errorStrategy: {
handleHttp429Error: handleAtlassian429Error
}
});
if ((searchResult.maxResults ?? 0) < getEnvVars(context).JIRA_JQL_SEARCH_API_PAGE_SIZE) {
console.warn(`JIRA_JQL_SEARCH_API_PAGE_SIZE is larger than the actual allowed maximum: ${searchResult.maxResults}`);
}
return searchResult.issues ?? [];
}
async function getSourceInstanceWorklogs(context: Context, issues: IssueBeanAsResponse[]): Promise<IssueWorklog[]> {
logIfVerbose(context, 'Getting source instance worklogs...');
const worklogsTime = Date.now();
const sourceWorklogs = await getWorklogs(context, TempoCloudSource, issues.map(i => +(i.id ?? '0')));
try {
return sourceWorklogs.map(w => ({
...w,
remainingEstimateSeconds: getEnvVars(context).KEEP_REMAINING_ESTIMATE ? issues.find(i => i.id === w.issue.id.toString())?.fields.timeestimate : undefined
}))
} finally {
logIfVerbose(context, `Found ${sourceWorklogs.length} worklogs from source instance. Time spent: ${getTimeSpent(worklogsTime)}.`);
}
}
async function populateIssueMappingCache(context: Context, issues: IssueBeanAsResponse[], uniqueSourceIssueIds: number[]) {
logIfVerbose(context, `Populating issue mapping cache for ${uniqueSourceIssueIds.length} issues...`);
const populateCacheTime = Date.now();
await throttleAll(getEnvVars(context).ATLASSIAN_API_CONCURRENCY, Array.from(uniqueSourceIssueIds).map(id => () => addIssueMappingToCache(issues, id)));
logIfVerbose(context, `Issue mapping cache populated. Time spent: ${getTimeSpent(populateCacheTime)}.`);
}
async function deleteExistingWorklogs(context: Context, uniqueSourceIssueIds: number[], sourceWorklogs: WorklogAsResponse[]) {
const issuePairs: IssuePair[] = uniqueSourceIssueIds.map(id => ({
sourceId: id,
targetId: getTargetIssueIdFromCache(id)
}));
logIfVerbose(context, `Finding target worklogs for ${issuePairs.length} issues for deletion...`);
const targetWorklogTime = Date.now();
const targetWorklogs = await getWorklogs(context, TempoCloudTarget, issuePairs.map(ip => ip.targetId));
logIfVerbose(context, `Found ${targetWorklogs.length} worklogs from target instance. Time spent: ${getTimeSpent(targetWorklogTime)}.`);
const worklogDeletionTime = Date.now();
logIfVerbose(context, `Deleting ${targetWorklogs.length} worklogs from target instance...`);
await throttleAll(getEnvVars(context).TEMPO_API_CONCURRENCY, targetWorklogs.map((w, i) => () => deleteWorklog(context, issuePairs, w, sourceWorklogs, i)));
logIfVerbose(context, `Worklogs deleted. Time spent ${getTimeSpent(worklogDeletionTime)}.`);
}
async function migrateWorklogs(context: Context, sourceWorklogs: IssueWorklog[]) {
logIfVerbose(context, `Copying ${sourceWorklogs.length} worklogs...`);
const copyWorklogsTime = Date.now();
await throttleAll(getEnvVars(context).TEMPO_API_CONCURRENCY, sourceWorklogs.map((w, i) => () => migrateWorklog(context, w, i)));
logIfVerbose(context, `Worklogs copied. Time spent: ${getTimeSpent(copyWorklogsTime)}`);
}
function logIfVerbose(context: Context, message: string) {
if (getEnvVars(context).VERBOSE) {
console.log(message);
}
}
function handleAtlassian429Error(error: TooManyRequestsError<AtlassianCommonError>) {
// TODO: follow Retry-After and other Atlassian specific headers: https://developer.atlassian.com/cloud/jira/platform/rate-limiting/
if (error.response.headers.has('x-stitch-rate-limit')) {
srcThrottleCount++;
} else {
atlassianThrottleCount++;
}
if (!isTimeLeftForRetry()) {
throw Error('No time left for re-trying throttled Atlassian API call');
}
return retry(1000);
}
function handleTempo429Error(error: TooManyRequestsError<TempoCommonError>) {
if (error.response.headers.has('x-stitch-rate-limit')) {
srcThrottleCount++;
} else {
tempoThrottleCount++;
}
if (!isTimeLeftForRetry()) {
throw Error('No time left for re-trying throttled Tempo API call');
}
return retry(1000);
}
function getTimeSpent(startTime: number) {
return `${((Date.now() - startTime) / 1000).toFixed(2)} seconds`;
}
async function getWorklogs(context: Context, instance: TempoCloudApi, issueIds: number[]) {
const worklogs: WorklogAsResponse[] = [];
let offset = 0
const from = dayjs(getEnvVars(context).FROM_DATE).format('YYYY-MM-DD');
const to = dayjs(getEnvVars(context).TO_DATE).format('YYYY-MM-DD');
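// Page through worklogs; a full page (count === limit) means more results may follow.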
do {
const result = await instance.Worklog.getWorklogs({
from,
to,
issueId: issueIds,
limit: getEnvVars(context).TEMPO_GET_WORKLOGS_API_PAGE_SIZE,
offset,
errorStrategy: {
handleHttp429Error: handleTempo429Error
}
});
if (result.metadata.limit < getEnvVars(context).TEMPO_GET_WORKLOGS_API_PAGE_SIZE) {
console.warn(`TEMPO_GET_WORKLOGS_API_PAGE_SIZE is larger than the actual allowed maximum: ${result.metadata.limit}`);
}
worklogs.push(...result.results);
if (result.metadata.count === result.metadata.limit) {
offset += result.metadata.limit;
} else {
offset = 0;
}
} while (offset > 0)
return worklogs;
}
async function migrateWorklog(context: Context, worklog: IssueWorklog, index: number) {
try {
if (!worklog.issue.id) {
throw new Error(`Worklog issue ID is missing`);
}
if (index < getEnvVars(context).Simulation.SIMULATED_MIGRATION_FAILURES) {
throw Error('Simulated worklog migration failure');
}
const targetIssueId = getTargetIssueIdFromCache(worklog.issue.id ?? 0);
const targetUserAccountId = getTargetUserAccountIdFromCache(worklog.author.accountId);
try {
await TempoCloudTarget.Worklog.createWorklog({
body: {
authorAccountId: targetUserAccountId,
issueId: targetIssueId,
startDate: worklog.startDate,
startTime: worklog.startTime,
timeSpentSeconds: worklog.timeSpentSeconds,
attributes: (worklog.attributes.values ?? []).filter(a => !!a.value),
billableSeconds: getEnvVars(context).MIGRATE_BILLABLE_SECONDS ? worklog.billableSeconds : undefined,
description: worklog.description,
remainingEstimateSeconds: worklog.remainingEstimateSeconds
},
errorStrategy: {
handleHttp429Error: handleTempo429Error
}
});
} catch (e) {
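// If the mapped author is not allowed to log work in the target instance, fall back to DEFAULT_AUTHOR_ACCOUNT_ID (when configured).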
if (e instanceof ForbiddenError && getEnvVars(context).DEFAULT_AUTHOR_ACCOUNT_ID) {
if (getEnvVars(context).VERBOSE) {
console.warn(`User (${targetUserAccountId}) is not allowed to log work in target instance, logging worklog with default account instead.`);
}
await TempoCloudTarget.Worklog.createWorklog({
body: {
authorAccountId: getEnvVars(context).DEFAULT_AUTHOR_ACCOUNT_ID,
issueId: targetIssueId,
startDate: worklog.startDate,
startTime: worklog.startTime,
timeSpentSeconds: worklog.timeSpentSeconds,
attributes: (worklog.attributes.values ?? []).filter(a => !!a.value),
billableSeconds: getEnvVars(context).MIGRATE_BILLABLE_SECONDS ? worklog.billableSeconds : undefined,
description: `${worklog.description}\nLogged on behalf of the user: ${targetUserAccountId}`,
remainingEstimateSeconds: worklog.remainingEstimateSeconds
},
errorStrategy: {
handleHttp429Error: handleTempo429Error
}
});
} else {
throw e;
}
}
migratedWorklogsCount++;
if (getEnvVars(context).BATCH_TYPE === 'FAILED_WORKLOGS') {
// If migrating failed worklogs and the worklog that was successfully migrated is in the list of failed worklogs, then remove it from the list.
const index = failedWorklogs.findIndex(w => w.tempoId === worklog.tempoWorklogId);
if (index !== -1) {
failedWorklogs.splice(index, 1);
}
}
} catch (e) {
failedWorklogsCount++;
failedWorklogs.push({
tempoId: worklog.tempoWorklogId,
issueId: worklog.issue.id,
reason: `Error while migrating worklog: ${(e as Error).message}`,
date: new Date().toUTCString()
});
if (getEnvVars(context).LOG_WORKLOG_MIGRATION_ERRORS) {
console.error(`Error while migrating worklog`, e, {
worklog
});
}
if (getEnvVars(context).HALT_WHEN_WORKLOG_MIGRATION_FAILS) {
throw e;
}
}
}
async function deleteWorklog(context: Context, issuePairs: IssuePair[], targetWorklog: WorklogAsResponse, sourceWorklogs: WorklogAsResponse[], index: number) {
try {
if (index < getEnvVars(context).Simulation.SIMULATED_DELETION_FAILURES) {
throw Error('Simulated target issue worklog deletion failure');
}
await TempoCloudTarget.Worklog.deleteWorklog({
id: targetWorklog.tempoWorklogId.toString(),
errorStrategy: {
handleHttp429Error: handleTempo429Error
}
});
} catch (e) {
deletionFailures++;
if (getEnvVars(context).LOG_WORKLOG_DELETION_ERRORS) {
console.error(`Failed to delete worklog with ID: ${targetWorklog.tempoWorklogId}`, e);
}
if (getEnvVars(context).HALT_WHEN_WORKLOG_DELETION_FAILS) {
throw e;
} else {
const issuePair = issuePairs.find(ip => ip.targetId === targetWorklog.issue.id);
if (!issuePair) {
throw new Error('Issue pair was not found while processing target issue worklog deletion failure');
}
// If the target issue worklog deletion failure does not halt the script, then find all the sibling worklogs of the source issue
const siblings = sourceWorklogs.filter(w => w.issue.id === issuePair.sourceId);
for (const worklog of siblings) {
const index = sourceWorklogs.findIndex(w => w.tempoWorklogId === worklog.tempoWorklogId);
if (index !== -1) {
// And remove them from the list of worklogs to be migrated
sourceWorklogs.splice(index, 1);
}
if (!failedWorklogs.some(w => w.tempoId === worklog.tempoWorklogId)) {
failedWorklogsCount++;
// And also push them to the list of failed worklogs
failedWorklogs.push({
tempoId: worklog.tempoWorklogId,
issueId: issuePair.sourceId,
reason: `Error while deleting target issue (${targetWorklog.issue.id}) worklog (${targetWorklog.tempoWorklogId}): ${(e as Error).message}`,
date: new Date().toUTCString()
});
}
}
}
}
}
async function addIssueMappingToCache(sourceIssues: IssueBeanAsResponse[], sourceIssueId: number) {
try {
const sourceIssue = sourceIssues.find(i => i.id === sourceIssueId.toString());
if (!sourceIssue) {
// This should not happen
throw Error(`Source issue not found from the set with ID: ${sourceIssueId}`);
}
const targetIssueId = await getTargetIssueId(sourceIssue);
if (!targetIssueId) {
throw Error(`Target issue not found for source issue ID: ${sourceIssueId}`);
}
issueMappingCache[sourceIssueId] = targetIssueId;
} catch (e) {
console.error(`Error while populating issue mapping cache for source issue ID: ${sourceIssueId}`, e);
}
}
function getTargetIssueIdFromCache(sourceIssueId: number) {
const targetIssueId = issueMappingCache[sourceIssueId];
if (!targetIssueId) {
throw new Error(`Issue not found in target instance for source issue with ID: ${sourceIssueId}`);
}
return targetIssueId;
}
async function populateUserMappingCache(context: Context, targetUsers: GetUsersResponseOK, sourceUser: ArrayElement<GetUsersResponseOK>) {
try {
if (!sourceUser.accountId) {
// This should not happen
throw Error('Source user account ID missing');
}
const targetUserAccountId = await getTargetUserAccountId(targetUsers, sourceUser);
if (!targetUserAccountId) {
throw new Error(`User not found in target instance for source user: ${sourceUser.emailAddress} (${sourceUser.accountId})`)
}
userMappingCache[sourceUser.accountId] = targetUserAccountId;
} catch (e) {
if (getEnvVars(context).LOG_USER_MAPPING_ERRORS) {
console.warn(`Error while populating user mapping cache for source user: ${sourceUser.emailAddress ?? '[Hidden Email]'} (${sourceUser.accountId})`, e);
}
if (getEnvVars(context).HALT_WHEN_USER_MAPPING_FAILS) {
throw e;
}
}
}
function getTargetUserAccountIdFromCache(sourceUserAccountId: string) {
const targetUserAccountId = userMappingCache[sourceUserAccountId];
if (!targetUserAccountId) {
throw new Error(`Target user not found with source user account ID: ${sourceUserAccountId}`);
}
return targetUserAccountId;
}
async function getUsers(instance: JiraCloudApi) {
const allUsers: GetUsersResponseOK = [];
const maxResults = 50;
let startAt = 0;
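// Page through Jira users 50 at a time until a partial page signals the end of the list.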
do {
const users = await instance.User.getUsers({
startAt,
maxResults,
errorStrategy: {
handleHttp429Error: handleAtlassian429Error
}
});
allUsers.push(...users);
if (users.length === maxResults) {
startAt += maxResults;
} else {
startAt = 0
}
} while (startAt > 0);
return allUsers;
}
interface IssueWorklog extends WorklogAsResponse {
readonly remainingEstimateSeconds?: number;
}
OnGenerateReport

import { HttpEventRequest, HttpEventResponse, buildHTMLResponse } from '@sr-connect/generic-app/events/http';
import { RecordStorage } from '@sr-connect/record-storage';
import { getEnvVars, MigrationState, RecordStorageKeys } from './Utils';
// Report parameters, adjust them accordingly in the Parameters (Report folder).
// MAX_DISPLAYED_BATCHES: how many latest batches to display. Set to 0 to remove this section from the report, or to -1 to list all batches.
// MAX_DISPLAYED_FAILURES: how many latest worklog migration failures to display. Set to 0 to remove this section from the report, or to -1 to list all failures.
/**
* This function generates a summary report in HTML
**/
export default async function (event: HttpEventRequest, context: Context): Promise<HttpEventResponse> {
const storage = new RecordStorage();
const migrationState = await storage.getValue<MigrationState>(RecordStorageKeys.MIGRATION_STATE);
if (!migrationState) {
return buildHTMLResponse('No migration state found, you probably have not started a migration yet, or the first batch is still running.');
}
return buildHTMLResponse(`
${buildSummarySection(migrationState)}
${getEnvVars(context).Report.MAX_DISPLAYED_BATCHES === 0 ? '' : buildBatchesSection(context, migrationState)}
${getEnvVars(context).Report.MAX_DISPLAYED_FAILURES === 0 || migrationState.failedWorklogs.length === 0 ? '' : buildFailedWorklogMigrationsSection(context, migrationState)}`);
}
function buildSummarySection(migrationState: MigrationState) {
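// The per-batch rate counts only time spent inside batches, while the overall rate uses wall-clock time since the migration started.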
const totalTimeSpentPerBatch = migrationState.batches.reduce((prev, current) => prev + current.timeSpent, 0) / 1000;
const averageBatchTime = (totalTimeSpentPerBatch / migrationState.batches.length).toFixed(2);
const worklogsPerSecondPerBatch = ((migrationState.migratedWorklogsCount + migrationState.failedWorklogs.length) / totalTimeSpentPerBatch).toFixed(1);
const totalTimeSpent = ((migrationState.endTime ?? Date.now()) - migrationState.startTime) / 1000;
const worklogsPerSecond = ((migrationState.migratedWorklogsCount + migrationState.failedWorklogs.length) / totalTimeSpent).toFixed(1);
return `
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
table, th, td {
border: 1px solid;
}
table {
border-collapse: collapse;
width: 100%;
}
td, th {
padding: 5px;
}
tr:nth-child(even) {
background-color: #f2f2f2;
}
th {
padding-top: 12px;
padding-bottom: 12px;
text-align: left;
background-color: #04AA6D;
color: white;
}
tr:hover {
background-color: #ddd;
}
</style>
<h2>Summary</h2>
<div><strong>Time Elapsed</strong>: ${(totalTimeSpent / 60).toFixed(1)} minutes</div>
<div><strong>Issues Processed</strong>: ${migrationState.issuesProcessed}</div>
<div><strong>Migrated Worklogs</strong>: ${migrationState.migratedWorklogsCount}</div>
<div><strong>Failed Worklogs</strong>: ${migrationState.failedWorklogs.length}</div>
<div><strong>Deletion Failures</strong>: ${migrationState.deletionFailures}</div>
<div><strong>ScriptRunner Connect Throttles</strong>: ${migrationState.srcThrottleCount}</div>
<div><strong>Atlassian Throttles</strong>: ${migrationState.atlassianThrottleCount}</div>
<div><strong>Tempo Throttles</strong>: ${migrationState.tempoThrottleCount}</div>
<div><strong>Batches Completed</strong>: ${migrationState.batches.length}</div>
<div><strong>Average Batch Time</strong>: ${averageBatchTime} seconds</div>
<div><strong>Worklogs per Second</strong>: ${worklogsPerSecond}</div>
<div><strong>Worklogs per Second per Batch</strong>: ${worklogsPerSecondPerBatch}</div>
<div><strong>Last Updated</strong>: ${migrationState.batches[migrationState.batches.length - 1].completionDate}</div>
<div><strong>Start Time</strong>: ${new Date(migrationState.startTime).toUTCString()}</div>
<div><strong>End Time</strong>: ${migrationState.endTime ? new Date(migrationState.endTime).toUTCString() : 'In Progress'}</div>`
}
function buildBatchesSection(context: Context, migrationState: MigrationState) {
let batches = [...migrationState.batches].reverse();
const batchesToDisplay = getEnvVars(context).Report.MAX_DISPLAYED_BATCHES;
if (batchesToDisplay !== -1) {
batches = batches.slice(0, batchesToDisplay);
}
return `
<h2>Batches</h2>
<table>
<thead>
<tr>
<th>#</th>
<th>Batch Type</th>
<th>Migrated Worklogs</th>
<th>Failed Worklogs</th>
<th>Deletion Failures</th>
<th>SR Connect Throttles</th>
<th>Atlassian Throttles</th>
<th>Tempo Throttles</th>
<th>Batch Time</th>
<th>Worklogs per Second</th>
<th>Completion Date</th>
</tr>
</thead>
<tbody>
${batches.map((b, i) => `
<tr>
<td>${migrationState.batches.length - i}</td>
<td>${b.batchType}</td>
<td>${b.migratedWorklogsCount}</td>
<td>${b.failedWorklogsCount}</td>
<td>${b.deletionFailures}</td>
<td>${b.srcThrottleCount}</td>
<td>${b.atlassianThrottleCount}</td>
<td>${b.tempoThrottleCount}</td>
<td>${(b.timeSpent / 1000).toFixed(2)} seconds</td>
<td>${((b.migratedWorklogsCount + b.failedWorklogsCount) / (b.timeSpent / 1000)).toFixed(1)}</td>
<td>${b.completionDate}</td>
</tr>
`).join('')}
</tbody>
</table>`;
}
function buildFailedWorklogMigrationsSection(context: Context, migrationState: MigrationState) {
let failures = [...migrationState.failedWorklogs].reverse();
const failuresToDisplay = getEnvVars(context).Report.MAX_DISPLAYED_FAILURES;
if (failuresToDisplay !== -1) {
failures = failures.slice(0, failuresToDisplay);
}
return `
<h2>Failed Worklogs</h2>
<table>
<thead>
<tr>
<th>#</th>
<th style="width: 200px">Tempo Source Worklog ID</th>
<th style="width: 150px">Source Issue ID</th>
<th style="width: 300px">Date</th>
<th>Reason</th>
</tr>
</thead>
<tbody>
${failures.map((w, i) => `
<tr>
<td>${migrationState.failedWorklogs.length - i}</td>
<td>${w.tempoId}</td>
<td>${w.issueId}</td>
<td>${w.date}</td>
<td>${w.reason}</td>
</tr>
`).join('')}
</tbody>
</table>`;
}
import { RecordStorage } from "@sr-connect/record-storage";
import { RecordStorageKeys } from "./Utils";
/**
* This script resets migration state, run this script if you need to restart your migration.
*/
export default async function(event: any, context: Context): Promise<void> {
const storage = new RecordStorage();
await storage.deleteValue(RecordStorageKeys.MIGRATION_STATE);
await storage.deleteValue(RecordStorageKeys.USER_MAPPING);
console.log('Migration state reset');
}
Utils

export interface ThrottleAwareResult {
throttleCount: number;
}
export type ArrayElement<A> = A extends readonly (infer T)[] ? T : never
export const RecordStorageKeys = {
MIGRATION_STATE: 'MIGRATION_STATE',
USER_MAPPING: 'USER_MAPPING'
}
export interface MigrationState {
issuesProcessed: number;
migratedWorklogsCount: number;
failedWorklogsCount: number;
srcThrottleCount: number;
atlassianThrottleCount: number;
tempoThrottleCount: number;
deletionFailures: number;
batches: MigrationBatch[],
failedWorklogs: FailedWorklog[];
startTime: number;
endTime?: number;
}
export interface FailedWorklog {
tempoId: number;
issueId?: number;
reason: string;
date: string;
}
export interface MigrationBatch {
batchType: BatchType;
migratedWorklogsCount: number;
failedWorklogsCount: number;
srcThrottleCount: number;
atlassianThrottleCount: number;
tempoThrottleCount: number;
deletionFailures: number;
timeSpent: number;
completionDate: string;
}
export interface IssuePair {
sourceId: number;
targetId: number;
}
export type BatchType = 'JQL' | 'FAILED_WORKLOGS';
interface EnvVars {
JQL: string;
BATCH_TYPE: BatchType;
FROM_DATE: string;
TO_DATE: string;
DEFAULT_AUTHOR_ACCOUNT_ID?: string;
MIGRATE_BILLABLE_SECONDS: boolean;
KEEP_REMAINING_ESTIMATE: boolean;
ATLASSIAN_API_CONCURRENCY: number;
TEMPO_API_CONCURRENCY: number;
JIRA_JQL_SEARCH_API_PAGE_SIZE: number;
TEMPO_GET_WORKLOGS_API_PAGE_SIZE: number;
VERBOSE: boolean;
DELETE_EXISTING_WORKLOGS: boolean;
HALT_WHEN_USER_MAPPING_FAILS: boolean;
LOG_USER_MAPPING_ERRORS: boolean;
HALT_WHEN_WORKLOG_DELETION_FAILS: boolean;
LOG_WORKLOG_DELETION_ERRORS: boolean;
HALT_WHEN_WORKLOG_MIGRATION_FAILS: boolean;
LOG_WORKLOG_MIGRATION_ERRORS: boolean;
Simulation: {
SIMULATED_MIGRATION_FAILURES: number;
SIMULATED_DELETION_FAILURES: number;
}
Advanced: {
BATCH_CYCLE_CUTOFF_TIME_MULTIPLIER: number;
BATCH_CYCLE_MIN_TIME: number;
RETRY_CUTOFF_TIME: number;
}
GenerateTestData: {
PROJECT_KEY: string;
ISSUE_TYPE: string;
CONCURRENCY: number;
}
Report: {
MAX_DISPLAYED_BATCHES: number;
MAX_DISPLAYED_FAILURES: number;
}
}
export function getEnvVars(context: Context) {
return context.environment.vars as EnvVars;
}