About ScriptRunner Connect
What is ScriptRunner Connect?
ScriptRunner Connect is a fully managed SaaS integration platform that lets you connect your applications and build or customize the integration logic in code.
Can I try it out for free?
Yes. ScriptRunner Connect comes with a forever free tier.
Can I customize the integration logic?
Absolutely. The main value proposition of ScriptRunner Connect is that you get full access to the code that is powering the integration, which means you can make any changes to the integration logic yourself.
Can I change the integration to communicate with additional apps?
Yes. Since ScriptRunner Connect specializes in enabling complex integrations, you can easily change the integration logic to connect to as many additional apps as you need, with no limitations.
What if I don't feel comfortable making changes to the code?
First, you can try our AI assistant, which can help you understand what the code does and also help you make changes to it. Alternatively, you can hire our professionals to make the changes you need or to build new integrations from scratch.
Do I have to host it myself?
No. ScriptRunner Connect is a fully managed SaaS (Software-as-a-Service) product.
What about security?
ScriptRunner Connect is ISO 27001 and SOC 2 certified. Learn more about our security.
Template Content
This integration migrates custom field values from issues on a Jira Data Center (or Server) instance to issues with the same issue keys on a Jira Cloud instance. The migration logic is designed to be resumable, meaning you can abort or pause the migration at any time and continue where you left off by rerunning the migration script. Failed migrations are recorded, and you can run the script again to re-try only the failed issues.
The following custom field types are supported: cascading, radio, checkbox, multi-select, and single-select.
The following scripts are included:
- MigrateCustomFieldValues - Run this script to start the migration process.
- ResetMigration - Run this script to reset the migration state.
- ReadMigrationState - Run this script to see the currently recorded migration state.
- CreateDummyIssues - Run this script if you need to generate dummy issues to test the migration. It creates issues on the Jira On-Premise instance (and matching issues on the Jira Cloud instance) with custom field values set for the predefined custom fields.
Set up the source Jira On-Premise and target Jira Cloud connectors before running the scripts.
When you run the migration script, the following will happen:
- The migration state is loaded from RecordStorage; if no state is found (new migration), an empty state is created.
- If the script is running in FAILED_ISSUES mode (re-trying to migrate failed issues), a list of unique failed issue keys is assembled, containing up to JIRA_JQL_SEARCH_API_PAGE_SIZE keys.
- Issues are fetched from the source instance with a JQL search. If the script is running in JQL mode, the query is the one the user specified in the JQL parameter, skipping the number of results that have already been migrated. If the script is running in FAILED_ISSUES mode, a JQL query is constructed that retrieves the issues collected in the previous step.
- For each retrieved issue, the mapped custom field values are copied to the issue with the same key on the Jira Cloud instance; issues that fail to migrate or cannot be found on the target instance are recorded in the migration state.
- After each batch the migration state is saved, and if not enough time is left in the current invocation, a new invocation of the migration script is triggered automatically to continue.
To start the migration, open the MigrateCustomFieldValues script. At the top of the file you'll see a set of parameters; adjust them according to your needs. At a minimum, you have to change the JQL parameter, which determines which issues get migrated, and fill out the CUSTOM_FIELDS variable with the custom fields that need to be migrated.
You can provide either custom field names or IDs; set the PROVIDE_CUSTOM_FIELD_IDS variable accordingly, depending on which kind you provide.
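For illustration only, a minimal parameter setup in MigrateCustomFieldValues might look like the following sketch; the JQL expression and field names here are placeholders that you would replace with values from your own instances:
const JQL = 'project = MYPROJECT'; // Placeholder: selects the source issues to migrate.
const PROVIDE_CUSTOM_FIELD_IDS = false; // Field names (not IDs) are provided below.
export const CUSTOM_FIELDS: CustomFieldFromUser[] = [
    // Placeholder field names; source and target names may differ on your instances.
    { sourceNameOrId: 'Severity', targetNameOrId: 'Severity', type: 'single-select' },
    { sourceNameOrId: 'Affected Components', targetNameOrId: 'Affected Components', type: 'multi-select' },
];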
Once you're happy with the parameters, run the MigrateCustomFieldValues script manually. Once the migration process is underway, you can abort it at any time by clicking the Abort Invocation button on the message you received in the console when you triggered the migration script. Do not clear the console, otherwise you will lose the option to abort the process. If you clear it accidentally anyway, you will have to wait up to 15 minutes for the script to restart itself, at which point you'll get another message.
A fair amount of information is written to the console while the integration is running. How much gets logged is controlled by the VERBOSE parameter, which you can enable if you want to see more detail; this can be useful when tweaking the script parameters to find the settings that give maximum performance.
The main performance difference comes from how many issues can be migrated concurrently. For that you can tweak the JIRA_CLOUD_API_CONCURRENCY parameter in the MigrateCustomFieldValues script, which controls how many concurrent API calls may be fired towards Jira Cloud. The console shows how many times your API calls were throttled; use this metric to find a good balance. Overcommitting and getting throttled a lot will most likely slow your migration down.
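For context, the concurrency limit is applied with promise-throttle-all inside the batch processing; the relevant line from the MigrateCustomFieldValues script below is:
// At most JIRA_CLOUD_API_CONCURRENCY migration tasks run at the same time; each task makes its own
// Jira Cloud API calls, and every HTTP 429 response increments the throttle counters shown in the console.
await throttleAll(JIRA_CLOUD_API_CONCURRENCY, sourceIssues.map((issue) => () => migrateFieldValues(issue)));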
Some issues may fail to migrate. The cause can be a configuration error or a service error. Whatever the reason, you can always re-try failed issues. If you see failures happening, you don't necessarily have to wait until the end of the migration: you can abort the process, fix the error if it is something you can fix, and then either continue with the migration or re-try the failed migrations and then continue.
To re-try failed issues, open the MigrateCustomFieldValues script, set BATCH_TYPE to FAILED_ISSUES, and run the script. This will re-try all the failed issues. If the re-tries are successful, you should see the number of failed issues recorded in the migration state go down.
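In other words, the only change needed in the parameter block is this one line:
const BATCH_TYPE: BatchType = 'FAILED_ISSUES'; // Re-try issues recorded as failed in the migration state.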
Run the ResetMigration script if you intend to restart your migration process, at which point you should probably also make sure to enable the workspace deletion parameter as well. Keep in mind that the JIRA_JQL_SEARCH_API_PAGE_SIZE parameter in the MigrateCustomFieldValues script also controls the size of the batch.
CreateDummyIssues
import JiraCloudTarget from './api/jira/cloud';
import JiraOnPremise from "./api/jira/on-premise";
import { JiraCloudApi } from '@managed-api/jira-cloud-v3-sr-connect';
import { JiraOnPremApi } from '@managed-api/jira-on-prem-v8-sr-connect';
import { retry } from '@managed-api/commons-core';
import { ThrottleAwareResult } from './Utils';
import { buildCustomFieldsMapping, customFieldsMappingCache, CUSTOM_FIELDS } from "./MigrateCustomFieldValues";
import { GetEditIssueMetadataResponseOK } from '@managed-api/jira-on-prem-v8-core/types/issue/metadata';
// Script parameters, adjust them accordingly.
const PROJECT_KEY = 'MT'; // Key of the source and target project to generate dummy issues for.
const ISSUE_TYPE = 'Task'; // Issue type to generate dummy issues for.
const CONCURRENCY = 5; // How many generator tasks can run concurrently.
/** This script generates dummy issues on the source and target instances and adds values to predefined custom fields for each source issue. The function runs for up to 15 minutes. */
export default async function (event: any, context: Context): Promise<void> {
// Build custom fields mapping for source and target instance
await buildCustomFieldsMapping(CUSTOM_FIELDS);
// Create issue to retrieve metadata with available options for custom fields
const zeroIssue = await createSourceIssue(JiraOnPremise, 'Issue for metadata', {});
const metadata = await JiraOnPremise.Issue.Metadata.getEditMetadata({
issueIdOrKey: zeroIssue.issueKey ?? ''
});
// Build fields object values for source dummy issues
const updateFields = composeCustomFieldsValuesForSourceDummyIssues(metadata);
let count = 0;
while (true) {
const time = Date.now();
const issueTasks: Promise<CreateDummyIssuePairResult>[] = [];
for (let i = 0; i < CONCURRENCY; i++) {
issueTasks.push(createDummyIssuePair(count++, updateFields));
}
const results = await Promise.all(issueTasks);
console.log(`Batch done`, {
time: `${(Date.now() - time) / 1000} seconds`,
cloudThrottleCount: results.reduce((prev, current) => prev + current.cloudThrottleCount, 0),
onPremThrottleCount: results.reduce((prev, current) => prev + current.onPremThrottleCount, 0),
});
if (Date.now() > context.startTime + context.timeout - 60 * 1000) {
break;
}
}
}
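// Creates a matching pair of dummy issues (source issue with custom field values, target issue with just a summary) and returns how many times each instance throttled the calls.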
async function createDummyIssuePair(count: number, valuesForSourceIssueCustomFields: Record<string, any>): Promise<CreateDummyIssuePairResult> {
try {
const summary = `Test issue ${count}`;
const [sourceIssue, targetIssue] = await Promise.all([createSourceIssue(JiraOnPremise, summary, valuesForSourceIssueCustomFields), createTargetIssue(JiraCloudTarget, summary)]);
return {
cloudThrottleCount: targetIssue.throttleCount,
onPremThrottleCount: sourceIssue.throttleCount,
}
} catch (e) {
console.error('Failed to create dummy issue', e);
}
return {
cloudThrottleCount: 0,
onPremThrottleCount: 0,
}
}
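// Creates an issue with the given summary and field values on the source (Jira On-Premise) instance, re-trying and counting HTTP 429 (throttled) responses.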
export async function createSourceIssue(instance: JiraOnPremApi, summary: string, fields: Record<string, any>): Promise<CreateIssueResult> {
let throttleCount = 0;
const project = await instance.Project.getProject({
projectIdOrKey: PROJECT_KEY,
});
const issueTypes = await instance.Issue.Type.getTypes();
const issueType = issueTypes.find(it => it.name === ISSUE_TYPE);
if (!issueType) {
throw Error('Issue Type not found in source instance');
}
const issue = await instance.Issue.createIssue({
body: {
fields: {
project: {
id: project.id ?? ''
},
issuetype: {
id: issueType.id ?? ''
},
summary,
...fields
}
},
errorStrategy: {
handleHttp429Error: (response) => {
console.log('JIRA THROTTLED', response);
throttleCount++;
return retry(1000);
}
}
});
return {
issueId: +(issue.id ?? '0'),
issueKey: issue.key,
throttleCount
}
}
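// Creates an issue with the given summary on the target (Jira Cloud) instance, re-trying and counting HTTP 429 (throttled) responses.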
export async function createTargetIssue(instance: JiraCloudApi, summary: string): Promise<CreateIssueResult> {
let throttleCount = 0;
const project = await instance.Project.getProject({
projectIdOrKey: PROJECT_KEY,
});
const issueTypes = await instance.Issue.Type.getTypesForProject({
projectId: +(project.id ?? 0)
});
const issueType = issueTypes.find(it => it.name === ISSUE_TYPE);
if (!issueType) {
throw Error('Issue Type not found');
}
const issue = await instance.Issue.createIssue({
body: {
fields: {
project: {
id: project.id ?? ''
},
issuetype: {
id: issueType.id ?? ''
},
summary
}
},
errorStrategy: {
handleHttp429Error: (response) => {
console.log('JIRA THROTTLED', response);
throttleCount++;
return retry(1000);
}
}
});
return {
issueId: +(issue.id ?? '0'),
issueKey: issue.key,
throttleCount
}
}
interface CreateDummyIssuePairResult {
cloudThrottleCount: number;
onPremThrottleCount: number;
}
interface CreateIssueResult extends ThrottleAwareResult {
issueId: number;
issueKey?: string;
}
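// Builds the custom field values for the source dummy issues by picking the first (or all) allowed values from the edit metadata of each mapped field.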
export const composeCustomFieldsValuesForSourceDummyIssues = (metadata: GetEditIssueMetadataResponseOK) => {
const updateFields: Record<string, any> = {};
for (const field in customFieldsMappingCache) {
const fieldObj = customFieldsMappingCache[field];
switch (fieldObj.type) {
case 'cascading':
updateFields[field] = {
value: metadata.fields?.[field].allowedValues?.[0].value,
child: metadata.fields?.[field].allowedValues?.[0].children[0]?.value ? {
value: metadata.fields?.[field].allowedValues?.[0].children[0].value
} : undefined
}
break;
case 'radio':
case 'single-select':
updateFields[field] = { value: metadata.fields?.[field].allowedValues?.[0].value, }
break;
case 'checkbox':
case 'multi-select':
updateFields[field] = metadata.fields?.[field].allowedValues?.map(val => ({ value: val?.value }))
break;
default:
break;
}
}
return updateFields;
}
MigrateCustomFieldValues
import JiraCloud from './api/jira/cloud';
import JiraOnPremise from "./api/jira/on-premise";
import { RecordStorage } from '@sr-connect/record-storage';
import { FailedIssue, MigrationState, BatchType, RecordStorageKeys, NotFoundIssue, CustomFieldFromUser } from './Utils';
import { retry, TooManyRequestsError } from '@managed-api/commons-core';
import { throttleAll } from 'promise-throttle-all';
import { triggerScript } from '@sr-connect/trigger';
import { CommonError as AtlassianCommonError } from '@managed-api/jira-cloud-v3-core/errorStrategy';
import { IssueResponse } from '@managed-api/jira-on-prem-v8-core/definitions/issue';
import { IssueFieldsUpdate } from '@managed-api/jira-cloud-v3-core/definitions/IssueFields';
// Script parameters, adjust them accordingly.
const JQL = 'project = MT'; // JQL query that determines which issues will get migrated.
const BATCH_TYPE: BatchType = 'JQL'; // Determines how to retrieve issues for migration. Use "JQL" for regular migration that finds source issues based on JQL expression. Use "FAILED_ISSUES" after migration when there are some issues that failed to be migrated, this option will essentially re-try to migrate failed issues.
const JIRA_CLOUD_API_CONCURRENCY = 10; // How many Atlassian API calls will be allowed to run concurrently. Tweak this setting according to your instance rate limits.
const JIRA_JQL_SEARCH_API_PAGE_SIZE = 50; // Limits how many issues are returned from the JQL search per page; this also controls the size of the migration batch.
const VERBOSE = true; // Whether to log information about intermediate steps.
const HALT_WHEN_FIELD_VALUES_MIGRATION_FAILS = false; // Whether to halt the migration when field values migration fails.
const LOG_ISSUE_FIELD_VALUES_MIGRATION_ERRORS = true; // Whether to log field values migration errors to console.
// Set this to false if custom field names are provided, or to true if you provide custom field IDs. Keep in mind that IDs must have the 'customfield_' prefix to be processed correctly by the API, for example: 'customfield_123456'.
const PROVIDE_CUSTOM_FIELD_IDS = false;
// Custom field names or IDs on both instances, along with the type. The following types are supported: 'cascading', 'radio', 'checkbox', 'multi-select', 'single-select'.
export const CUSTOM_FIELDS: CustomFieldFromUser[] = [
{
sourceNameOrId: 'Migration-test-cascading',
targetNameOrId: 'Migration-test-cascading',
type: 'cascading'
},
{
sourceNameOrId: 'Migration-test-radio',
targetNameOrId: 'Migration-test-radio',
type: 'radio'
},
{
sourceNameOrId: 'Migration-test-checkbox',
targetNameOrId: 'Migration-test-checkbox',
type: 'checkbox'
},
{
sourceNameOrId: 'Migration-test-dropdown-multi',
targetNameOrId: 'Migration-test-dropdown-multi',
type: 'multi-select'
},
{
sourceNameOrId: 'Migration-test-dropdown-single',
targetNameOrId: 'Migration-test-dropdown-single',
type: 'single-select'
},
];
// Advanced parameters, don't change these unless you know what you are doing
const BATCH_CYCLE_CUTOFF_TIME_MULTIPLIER = 3; // This multiplier is used to decide when not to continue with the next batch. If the time left in the current invocation is smaller than the average processing time of previous batches multiplied by this constant, a new script invocation is triggered instead.
const BATCH_CYCLE_MIN_TIME = 120; // Minimum time in seconds that must be left in the invocation for the next batch to be initiated, even if the average batch time with the multiplier would allow less. Keep this number higher than RETRY_CUTOFF_TIME.
const RETRY_CUTOFF_TIME = 60; // Seconds left until the function invocation timeout from which point onward throttled API requests will no longer be re-tried; issues will be marked as failed if this happens.
export const customFieldsMappingCache: Record<string, { id: string, type: CustomFieldFromUser['type'] }> = {};
let stitchItThrottleCount = 0;
let cloudThrottleCount = 0;
let migratedIssuesCount = 0;
let failedIssuesCount = 0;
let failedIssues: FailedIssue[] = [];
let notFoundIssuesCount = 0;
let notFoundIssues: NotFoundIssue[] = [];
let isTimeLeftForRetry: () => boolean;
/**
* This function migrates issue custom field values either by finding source issues by JQL expression or re-trying failed issues.
*/
export default async function (event: any, context: Context): Promise<void> {
const startTime = Date.now();
init(context);
const storage = new RecordStorage();
const migrationState = await loadMigrationState(storage, startTime);
// Build custom fields mapping for source and target instance
await buildCustomFieldsMapping(CUSTOM_FIELDS, PROVIDE_CUSTOM_FIELD_IDS);
while (true) {
resetBatchCounters(migrationState);
const batchTime = Date.now();
const issuesProcessed = await processBatchOfIssues(migrationState);
const timeSpent = Date.now() - batchTime;
console.info(`Batch of ${issuesProcessed} issues migrated. Failed issues: ${failedIssuesCount}. Not found issues: ${notFoundIssuesCount}. Stitch It throttle count: ${stitchItThrottleCount}. Atlassian throttle count: ${cloudThrottleCount}. Time spent: ${getTimeSpent(batchTime)}.`)
updateMigrationState(migrationState, issuesProcessed, timeSpent);
await storage.setValue(RecordStorageKeys.MIGRATION_STATE, migrationState);
if (issuesProcessed === 0) {
if (BATCH_TYPE === 'JQL') {
console.log('No more issues found to migrate, exiting job.');
}
if (BATCH_TYPE === 'FAILED_ISSUES') {
console.log('No more issues found to re-try migration, exiting job.');
}
break;
}
if (!isTimeLeftForNextBatch(migrationState, context)) {
console.log('No time left to run a next batch of issues in this invocation, starting a new one.');
await triggerScript('MigrateCustomFieldValues');
break;
}
}
}
function init(context: Context) {
isTimeLeftForRetry = () => context.startTime + context.timeout - RETRY_CUTOFF_TIME * 1000 > Date.now();
resetErrorStrategies();
}
function resetErrorStrategies() {
// Reset global error strategies (remove default re-try logic), since we'll be defining our own re-try behaviour when getting throttled.
JiraCloud.setGlobalErrorStrategy(null);
}
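// Resets the per-batch counters and points the failed and not-found issue lists at the lists stored in the migration state.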
function resetBatchCounters(migrationState: MigrationState) {
stitchItThrottleCount = 0;
cloudThrottleCount = 0;
migratedIssuesCount = 0;
failedIssuesCount = 0;
failedIssues = migrationState.failedIssues;
notFoundIssuesCount = 0;
notFoundIssues = migrationState.notFoundIssues;
}
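// Merges the counters from the completed batch into the overall migration state and appends a record of the batch; marks the end time when no issues were processed.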
function updateMigrationState(migrationState: MigrationState, issuesCount: number, timeSpent: number) {
migrationState.migratedIssuesCount += migratedIssuesCount;
migrationState.failedIssuesCount = failedIssues.length;
migrationState.notFoundIssuesCount = notFoundIssues.length;
migrationState.stitchItThrottleCount += stitchItThrottleCount;
migrationState.cloudThrottleCount += cloudThrottleCount;
migrationState.batches.push({
batchType: BATCH_TYPE,
migratedIssuesCount,
stitchItThrottleCount,
cloudThrottleCount,
timeSpent,
failedIssuesCount,
notFoundIssuesCount,
completionDate: new Date().toUTCString()
});
if (BATCH_TYPE === 'JQL') {
migrationState.issuesProcessed += issuesCount;
}
if (issuesCount === 0) {
migrationState.endTime = Date.now();
} else {
migrationState.endTime = undefined;
}
}
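// Loads the saved migration state from Record Storage, or initializes an empty state for a new migration.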
async function loadMigrationState(storage: RecordStorage, startTime: number) {
let migrationState = await storage.getValue<MigrationState>(RecordStorageKeys.MIGRATION_STATE);
if (!migrationState) {
migrationState = {
startTime,
issuesProcessed: 0,
migratedIssuesCount: 0,
failedIssuesCount: 0,
notFoundIssuesCount: 0,
stitchItThrottleCount: 0,
cloudThrottleCount: 0,
batches: [],
failedIssues: [],
notFoundIssues: [],
}
}
return migrationState;
}
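// Decides whether enough invocation time remains for another batch, based on the average duration of previous batches (with the configured multiplier and minimum).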
function isTimeLeftForNextBatch(migrationState: MigrationState, context: Context) {
const batches = migrationState?.batches ?? [];
const averageBatchTime = batches.reduce((prev, current) => prev + current.timeSpent, 0) / batches.length;
let timeRequiredForNextBatch = averageBatchTime * BATCH_CYCLE_CUTOFF_TIME_MULTIPLIER;
if (timeRequiredForNextBatch < BATCH_CYCLE_MIN_TIME * 1000) {
timeRequiredForNextBatch = BATCH_CYCLE_MIN_TIME * 1000;
};
return context.startTime + context.timeout - timeRequiredForNextBatch > Date.now();
}
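// Retrieves the next batch of source issues (by JQL or from the recorded failed issues) and migrates their custom field values; returns the number of issues processed.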
async function processBatchOfIssues(migrationState: MigrationState) {
if (BATCH_TYPE === 'JQL') {
logIfVerbose(`Getting issues from source instance, skipping ${migrationState.issuesProcessed} issues...`);
}
if (BATCH_TYPE === 'FAILED_ISSUES') {
logIfVerbose(`Getting failed issues from source instance...`);
}
const sourceIssuesTime = Date.now();
const failedIssueKeys = Array.from(new Set(migrationState.failedIssues.filter(w => !!w.issueKey).map(w => w.issueKey!))).slice(0, JIRA_JQL_SEARCH_API_PAGE_SIZE);
if (BATCH_TYPE === 'FAILED_ISSUES' && failedIssueKeys.length === 0) {
return 0;
}
const issues = await getIssuesFromJQL(migrationState, failedIssueKeys);
console.log(`Found ${issues.length} issues from source instance. Time spent ${getTimeSpent(sourceIssuesTime)}. Issues: ${(issues).map(i => i.key).join(', ')}.`);
if (issues.length === 0) {
return 0;
}
await migrateCustomFieldValuesForIssues(issues);
return issues.length;
}
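// Runs a JQL search on the source instance: the configured JQL query (in JQL mode) or a query over the failed issue keys (in FAILED_ISSUES mode).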
async function getIssuesFromJQL(migrationState: MigrationState, failedIssueKeys: string[]) {
const searchResult = await JiraOnPremise.Issue.Search.searchByJql({
body: {
jql: BATCH_TYPE !== 'FAILED_ISSUES' ? JQL : `issue in (${failedIssueKeys.join(', ')})`,
startAt: BATCH_TYPE === 'JQL' ? migrationState.issuesProcessed : undefined,
maxResults: JIRA_JQL_SEARCH_API_PAGE_SIZE,
fields: ['key', 'id', ...Object.keys(customFieldsMappingCache)]
}
});
if ((searchResult.maxResults ?? 0) < JIRA_JQL_SEARCH_API_PAGE_SIZE) {
console.warn(`JIRA_JQL_SEARCH_API_PAGE_SIZE is larger than the actual allowed maximum: ${searchResult.maxResults}`);
}
return searchResult.issues ?? [];
}
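// Migrates field values for the given source issues, running at most JIRA_CLOUD_API_CONCURRENCY migrations concurrently.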
async function migrateCustomFieldValuesForIssues(sourceIssues: IssueResponse[]) {
logIfVerbose(`Copying values for ${sourceIssues.length} issues...`);
const copyFieldValuesTime = Date.now();
await throttleAll(JIRA_CLOUD_API_CONCURRENCY, sourceIssues.map((issue) => () => migrateFieldValues(issue)));
logIfVerbose(`Field values copied. Time spent: ${getTimeSpent(copyFieldValuesTime)}`);
}
function logIfVerbose(message: string) {
if (VERBOSE) {
console.log(message);
}
}
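// Counts throttled (HTTP 429) responses, distinguishing ScriptRunner Connect (Stitch It) throttling from Atlassian throttling, and re-tries after a delay unless the invocation is about to run out of time.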
function handleAtlassian429Error(error: TooManyRequestsError<AtlassianCommonError>) {
// TODO: follow Retry-After and other Atlassian specific headers: https://developer.atlassian.com/cloud/jira/platform/rate-limiting/
if (error.response.headers.has('x-stitch-rate-limit')) {
stitchItThrottleCount++;
} else {
cloudThrottleCount++;
}
if (!isTimeLeftForRetry()) {
throw Error('No time left for re-trying throttled Atlassian API call');
}
return retry(1000);
}
function getTimeSpent(startTime: number) {
return `${((Date.now() - startTime) / 1000).toFixed(2)} seconds`;
}
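// Copies the mapped custom field values of one source issue to the Jira Cloud issue with the same key; records issues that are not found on the target or that fail to migrate.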
async function migrateFieldValues(issue: IssueResponse) {
try {
const fieldsForUpdate = composeFieldsForIssueUpdate(issue);
const existingIssue = await JiraCloud.Issue.getIssue({
issueIdOrKey: issue.key ?? '',
fields: ['key'],
errorStrategy: {
handleHttp404Error: () => false,
}
});
if (!existingIssue) {
notFoundIssuesCount++;
notFoundIssues.push({
issueKey: issue.key ?? '',
date: new Date().toUTCString()
});
return;
}
await JiraCloud.Issue.editIssue({
issueIdOrKey: issue.key ?? '',
body: {
fields: fieldsForUpdate
},
errorStrategy: {
handleHttp429Error: handleAtlassian429Error
}
});
migratedIssuesCount++;
if (BATCH_TYPE === 'FAILED_ISSUES') {
// If migrating failed values and the issue that was successfully migrated is in the list of failed issues, then remove it from the list.
const index = failedIssues.findIndex(w => w.issueKey === issue.key);
if (index !== -1) {
failedIssues.splice(index, 1);
}
}
} catch (e) {
failedIssuesCount++;
failedIssues.push({
issueKey: issue.key ?? '',
reason: `Error while migrating issue: ${(e as Error).message}`,
date: new Date().toUTCString()
});
if (LOG_ISSUE_FIELD_VALUES_MIGRATION_ERRORS) {
console.error(`Error while migrating issue`, e, {
issue
});
}
if (HALT_WHEN_FIELD_VALUES_MIGRATION_FAILS) {
throw e;
}
}
}
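// Converts the source issue's custom field values into the update payload for the Jira Cloud edit issue call, using the custom fields mapping.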
export const composeFieldsForIssueUpdate = (sourceIssueResponse: IssueResponse) => {
const sourceIssueFieldValues = sourceIssueResponse.fields;
const updateFields: IssueFieldsUpdate = {};
for (const field in sourceIssueFieldValues) {
const target = customFieldsMappingCache[field];
if (!target || !sourceIssueFieldValues[field]) {
continue;
}
switch (target.type) {
case 'cascading':
updateFields[target.id] = {
value: sourceIssueFieldValues[field]?.value,
child: sourceIssueFieldValues[field]?.child?.value ? {
value: sourceIssueFieldValues[field]?.child?.value
} : undefined
}
break;
case 'radio':
case 'single-select':
updateFields[target.id] = { value: sourceIssueFieldValues[field]?.value, }
break;
case 'checkbox':
case 'multi-select':
updateFields[target.id] = sourceIssueFieldValues[field].map((val: any) => ({ value: val?.value }))
break;
default:
break;
}
}
return updateFields;
}
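// Builds the mapping from source field ID to target field ID and type, either directly from the provided IDs or by resolving the provided field names on both instances.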
export const buildCustomFieldsMapping = async (fields: CustomFieldFromUser[], idsPassed: boolean = false) => {
if (idsPassed) {
fields.forEach(f => {
customFieldsMappingCache[f.sourceNameOrId] = {
id: f.targetNameOrId,
type: f.type
}
});
} else {
try {
const sourceFields = await JiraOnPremise.Issue.Field.Custom.getFields();
const targetFields = await JiraCloud.Issue.Field.getFields();
fields.forEach(f => {
const sourceFieldId = sourceFields.values?.find(sf => sf.name === f.sourceNameOrId)?.id;
const targetFieldId = targetFields.find(tf => tf.name === f.targetNameOrId)?.id;
if (!sourceFieldId) {
throw new Error(`ID for the field ${f.sourceNameOrId} not found at source instance.`);
}
if (!targetFieldId) {
throw new Error(`ID for the field ${f.targetNameOrId} not found at target instance.`);
}
customFieldsMappingCache[sourceFieldId] = {
id: targetFieldId,
type: f.type
}
});
} catch (e) {
throw new Error(`Error while mapping custom fields: ${e}`);
}
}
}
ReadMigrationState
import { RecordStorage } from "@sr-connect/record-storage";
import { RecordStorageKeys } from "./Utils";
export default async function (event: any, context: Context): Promise<void> {
const storage = new RecordStorage();
const migrationState = await storage.getValue(RecordStorageKeys.MIGRATION_STATE);
console.log('Current recorded migration state:', migrationState);
}
ResetMigration
import { RecordStorage } from "@sr-connect/record-storage";
import { RecordStorageKeys } from "./Utils";
/**
* This script resets migration state, run this script if you need to restart your migration.
*/
export default async function(event: any, context: Context): Promise<void> {
const storage = new RecordStorage();
await storage.deleteValue(RecordStorageKeys.MIGRATION_STATE);
console.log('Migration state reset');
}
Utils
export interface CustomFieldFromUser {
sourceNameOrId: string;
targetNameOrId: string;
type: 'cascading' | 'radio' | 'checkbox' | 'multi-select' | 'single-select';
}
export interface ThrottleAwareResult {
throttleCount: number;
}
export type ArrayElement<A> = A extends readonly (infer T)[] ? T : never
export const RecordStorageKeys = {
MIGRATION_STATE: 'MIGRATION_STATE'
}
export interface MigrationState {
issuesProcessed: number;
migratedIssuesCount: number;
failedIssuesCount: number;
notFoundIssuesCount: number;
stitchItThrottleCount: number;
cloudThrottleCount: number;
batches: MigrationBatch[],
failedIssues: FailedIssue[];
notFoundIssues: NotFoundIssue[];
startTime: number;
endTime?: number;
}
export interface FailedIssue {
issueKey: string;
reason: string;
date: string;
}
export interface NotFoundIssue {
issueKey: string;
date: string;
}
export interface MigrationBatch {
batchType: BatchType;
migratedIssuesCount: number;
failedIssuesCount: number;
notFoundIssuesCount: number;
stitchItThrottleCount: number;
cloudThrottleCount: number;
timeSpent: number;
completionDate: string;
}
export type BatchType = 'JQL' | 'FAILED_ISSUES';