Migrate XRay Test Step attachments from one instance to another



About the template


Migrate XRay Test Step attachments from one Jira Cloud instance to another.

About ScriptRunner Connect


What is ScriptRunner Connect?

ScriptRunner Connect is an AI assisted code-first (JavaScript/TypeScript) integration platform (iPaaS) for building complex integrations and automations.

Can I try it out for free?

Yes. ScriptRunner Connect comes with a forever free tier.

Can I customize the integration logic?

Absolutely. The main value proposition of ScriptRunner Connect is that you get full access to the code that powers the integration, which means you can change the integration logic yourself.

Can I change the integration to communicate with additional apps?

Yes. Since ScriptRunner Connect specializes in enabling complex integrations, you can easily change the integration logic to connect to as many additional apps as you need, with no limitations.

What if I don't feel comfortable making changes to the code?

First, try our AI assistant, which can help you understand what the code does and make changes to it. Alternatively, you can hire our professionals to make the changes you need or to build new integrations from scratch.

Do I have to host it myself?

No. ScriptRunner Connect is a fully managed SaaS (Software-as-a-Service) product.

What about security?

ScriptRunner Connect is ISO 27001 and SOC 2 certified. Learn more about our security.

Template Content


README

Scripts

GenerateDummyTests (TypeScript)
GetMigrationReport (TypeScript)
Sync HTTP Event

README


XRay Test Step Attachments Migration

📋 Overview

This integration migrates XRay test step attachments from a source instance to a target instance.

This template was developed with the assistance of an AI agent.

⚠️ Important: We strongly recommend upgrading to a paid ScriptRunner Connect plan before running large migrations. The rate limits in the free plan are too restrictive for meaningful progress, though the free plan can be used for testing purposes.

Note: All times displayed in reports and logs are in UTC timezone.

Key Features:

  • Long-running migrations: Automatically handles migrations that take longer than 15 minutes by checkpointing and resuming
  • Batch processing: Processes attachments in configurable batches with concurrency control
  • Failure tracking: Tracks failed attachments with detailed error information for retry
  • Progress reporting: Real-time HTML report showing migration progress, statistics, and failures
  • Retry mechanism: Retry previously failed attachments without re-migrating successful ones
  • Graceful stopping: Stop migration mid-process and resume later
  • Rate limit handling: Automatically handles API rate limiting with intelligent retry logic

How it works:

  1. Fetches test steps with attachments from the source XRay instance using a JQL query
  2. Resolves issue IDs from source to target using Jira Cloud APIs
  3. Matches test steps between source and target instances
  4. Downloads attachments from source and uploads them to corresponding test steps in the target instance
  5. Tracks success/failure for each attachment
  6. Automatically resumes from checkpoints if migration exceeds time limits
  7. Provides real-time progress reporting via web interface
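The checkpoint-and-resume behaviour in steps 6-7 can be sketched as a time-budgeted loop. All names below (runWithCheckpoints, Checkpoint, processBatch, timeBudgetMs) are illustrative, not the template's actual identifiers:

```typescript
interface Checkpoint {
    cursor: number;       // index of the next batch to process
    totalMigrated: number;
}

type BatchProcessor = (cursor: number) => { migrated: number; done: boolean };

/**
 * Processes batches until either the work is done or the time budget is
 * nearly exhausted, then returns a checkpoint so a later invocation can
 * resume where this one left off.
 */
function runWithCheckpoints(
    start: Checkpoint,
    processBatch: BatchProcessor,
    timeBudgetMs: number,
    now: () => number = Date.now,
): { checkpoint: Checkpoint; finished: boolean } {
    const deadline = now() + timeBudgetMs;
    let { cursor, totalMigrated } = start;

    while (now() < deadline) {
        const { migrated, done } = processBatch(cursor);
        cursor += 1;
        totalMigrated += migrated;
        if (done) return { checkpoint: { cursor, totalMigrated }, finished: true };
    }
    // Out of time: the caller persists the checkpoint and re-triggers the script.
    return { checkpoint: { cursor, totalMigrated }, finished: false };
}
```

In the template this checkpoint lives in Record Storage, which is why state survives between invocations.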

🖊️ Setup

Connectors

Configure the following Connectors in ScriptRunner Connect:

  • Jira Cloud (Source): Connector for the source Jira Cloud instance (used to resolve issue IDs)
  • Jira Cloud (Target): Connector for the target Jira Cloud instance (used to resolve issue IDs)

For detailed connector setup instructions, refer to the ScriptRunner Connect web UI.

API Connections

Configure the following API Connections in your workspace:

  • Jira Cloud Source: Link to the Jira Cloud (Source) Connector
    • Import path: ./api/jira/cloud/source
  • Jira Cloud Target: Link to the Jira Cloud (Target) Connector
    • Import path: ./api/jira/cloud/target

Event Listeners

Configure one Generic Event Listener:

  • GetMigrationReport: Sync HTTP Event Listener
    • Script: GetMigrationReport
    • Purpose: Provides real-time HTML report of migration progress
    • Access the report by navigating to the configured URL path

Parameters

Configure the following Parameters in the ScriptRunner Connect web UI:

Required Parameters

  • JQL (Text): JQL query to search issues for finding all tests, their steps, and attachments in each step
  • PAGE_SIZE (Number): Number of attachments to process per batch (recommended: 20-50, best not to exceed 100 unless you can negotiate much higher rate limits for your instance from XRay)
    • ⚠️ Important: With larger PAGE_SIZE values, the delay between requesting a migration stop and when it actually stops increases, as more items need to be processed in the current batch.
  • API_CONCURRENCY (Number): Maximum concurrent XRay API calls (recommended: 3-5, unless high rate limits can be negotiated)
    • ⚠️ Warning: High concurrency settings can trigger rate limiting or errors from XRay's side. If you encounter frequent rate limiting or errors, try reducing this value.
  • REPLY_FAILURES (Boolean): When enabled, retries previously failed attachments instead of migrating new ones
  • HALT_WHEN_ATTACHMENT_MIGRATION_FAILS (Boolean): When enabled, stops migration immediately on first failure
  • VERBOSE (Boolean): Enable detailed logging for debugging

XRay API Parameters (XRayAPI folder)

  • BASE_URL (Text): Base URL of XRay API instance
  • SOURCE_CLIENT_ID (Text): XRay source instance client ID
  • SOURCE_CLIENT_SECRET (Password): XRay source instance client secret
  • TARGET_CLIENT_ID (Text): XRay target instance client ID
  • TARGET_CLIENT_SECRET (Password): XRay target instance client secret

Reporting Parameters (Report folder)

  • MAX_DISPLAYED_BATCHES (Number): Maximum batches to show in report (-1 for all, 0 to hide section)
  • MAX_DISPLAYED_FAILURES (Number): Maximum failures to show in report (-1 for all, 0 to hide section)
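The -1 / 0 / N semantics of these two parameters amount to a small selection rule; this sketch mirrors the slicing logic used in the GetMigrationReport script (newest entries shown first):

```typescript
/**
 * Selects which entries the report displays:
 *  -1 shows everything, 0 hides the section, N shows the newest N.
 */
function selectDisplayed<T>(items: T[], max: number): T[] {
    if (max === 0) return [];
    const newestFirst = [...items].reverse();
    return max === -1 ? newestFirst : newestFirst.slice(0, max);
}
```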

Advanced Parameters (Advanced folder)

⚠️ Warning: Only modify these if you understand the implications.

  • BATCH_CYCLE_CUTOFF_TIME_MULTIPLIER (Number): Multiplier for average batch time when calculating if there's time for next batch
  • BATCH_CYCLE_MIN_TIME (Number): Minimum seconds to reserve before starting next batch
  • RETRY_CUTOFF_TIME (Number): Seconds before timeout to stop retrying rate-limited requests
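A minimal sketch of how the first two timing parameters can combine into the "is there time for another batch" decision; hasTimeForNextBatch is a hypothetical name, and the template's internal calculation may differ:

```typescript
/**
 * Decides whether the current invocation has enough time left to safely
 * start another batch: the remaining time must cover both the projected
 * batch duration (average time scaled by the multiplier) and the
 * reserved minimum.
 */
function hasTimeForNextBatch(
    remainingSec: number,
    averageBatchTimeSec: number,
    multiplier: number,  // BATCH_CYCLE_CUTOFF_TIME_MULTIPLIER
    minTimeSec: number,  // BATCH_CYCLE_MIN_TIME
): boolean {
    return remainingSec >= Math.max(averageBatchTimeSec * multiplier, minTimeSec);
}
```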

Simulation Parameters (Simulation folder)

Development/testing only - Not needed for production migrations:

  • RESTART_SCRIPT_AFTER_FIRST_BATCH (Boolean): Force script restart after first batch (for testing)
  • PERCENTAGE_OF_SIMULATED_FAILURES (Number): Percentage (0-1) of attachments to fail on purpose (for testing)
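A 0-1 failure percentage like this is naturally applied per attachment as a probability check; shouldSimulateFailure is a hypothetical name for illustration:

```typescript
/**
 * Returns true for roughly `percentage` of calls (0 never fails, 1 always
 * fails). The random source is injectable for deterministic testing.
 */
function shouldSimulateFailure(percentage: number, random: () => number = Math.random): boolean {
    return random() < percentage;
}
```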

🚀 Using the Migration

Starting a Migration

  1. Configure Parameters: Set JQL, PAGE_SIZE, API_CONCURRENCY, and XRay API credentials based on your migration needs
  2. Run Migration: Execute the RunMigration script manually from the ScriptRunner Connect web UI
  3. Monitor Progress: Access the migration report via the Generic Event Listener URL to view real-time progress

Viewing Migration Progress

Navigate to the Generic Event Listener URL configured for GetMigrationReport to view:

  • Migration status (Running, Completed, or Stopped)
  • Total attachments migrated and failed
  • Batch processing statistics
  • Detailed failure information
  • Throttle counts (rate limiting events)

⚠️ Important: Refresh the reporting page regularly to see the latest migration updates. The page does not auto-refresh.

Stopping a Migration

⚠️ CRITICAL: Do NOT stop the migration script directly from the ScriptRunner Connect web UI. This will cause the currently running batch to be processed again, leading to duplicate attachments in the target instance.

Correct procedure:

  1. Access Report: Navigate to the migration report page
  2. Click Stop: Click the "Stop Migration" button on the reporting page
  3. Wait for Completion: The migration will complete the current batch and then stop gracefully
  4. Resume Later: Run RunMigration script again to resume

⚠️ Important: After stopping the migration, do not refresh the page using the browser's refresh button. Browsers may re-trigger the POST request, which could stop the migration again if it was restarted in the meantime. If you need to reload the page, manually navigate to the URL instead of using the refresh button (browsers usually warn you about this beforehand).

⚠️ Warning: Clearing the console logs removes the abort option. If the console is cleared, wait 15 minutes for the script to restart (and you'll receive a new kickoff message and abort option).

Retrying Failed Attachments

⚠️ CRITICAL: When switching to retry mode, you MUST run the ResetCursor script BEFORE running the RunMigration script; no attachments will be retried until the cursor position has been reset.

  1. Review Failures: Check the migration report to see which attachments failed
  2. Enable Retry Mode: Set REPLY_FAILURES parameter to true
  3. Reset Cursor: Run ResetCursor script first - this resets the retry position to the beginning
  4. Run Migration: Execute RunMigration script
  5. Monitor Progress: The migration will retry only failed attachments, removing successful retries from the failure list

Resetting Retry Position Mid-Way: The ResetCursor script can also be used to reset the retry position mid-way through a retry cycle. For example, if you notice an issue and don't wish to wait until the end, you can:

  1. Stop the migration using the "Stop Migration" button on the reporting page
  2. Run the ResetCursor script to reset the cursor position
  3. Run RunMigration again to start retrying from the beginning of the failed attachments list

Resetting Migration State

To start a fresh migration:

  1. Run the ResetMigration script
  2. This clears all migration state and allows you to start from the beginning

Utility Scripts

  • RunMigration: Main migration script
  • GetMigrationReport: Generates HTML progress report (accessed via Generic Event Listener)
  • ResetMigration: Clears migration state to start fresh
  • ResetCursor: Resets pagination cursor position
  • GenerateDummyTests: Generates test data in both source and target instances, but without attachments (for testing)
  • WipeAuth: Clears cached authentication tokens

❗️ Considerations

Rate Limiting

  • The migration automatically handles rate limiting from both XRay API and ScriptRunner Connect
  • Adjust API_CONCURRENCY if you encounter frequent rate limiting
  • Rate limit events are tracked and displayed in the migration report
  • Rate-limited requests are retried infinitely until they succeed or time runs out
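The "retried infinitely until they succeed or time runs out" behaviour can be sketched as a deadline-bounded retry loop; retryUntilDeadline and its fixed delay are illustrative, not the template's exact implementation:

```typescript
/**
 * Keeps retrying a rate-limited request until it succeeds or retrying
 * would push past the deadline (e.g. the invocation's execution limit
 * minus RETRY_CUTOFF_TIME). Non-rate-limit errors are rethrown at once.
 */
async function retryUntilDeadline<T>(
    attempt: () => Promise<T>,
    deadlineMs: number,  // absolute epoch-ms time after which we give up
    isRateLimited: (err: unknown) => boolean,
    delayMs = 1000,
    now: () => number = Date.now,
    sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
    for (;;) {
        try {
            return await attempt();
        } catch (err) {
            if (!isRateLimited(err) || now() + delayMs >= deadlineMs) throw err;
            await sleep(delayMs);
        }
    }
}
```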

Large Migrations

  • Migrations exceeding 15 minutes automatically checkpoint and resume
  • No manual intervention required - the script triggers itself to continue
  • Migration state is preserved between invocations
  • Migration Duration Limit: Migrations are allowed to run up to 50 hours (maximum 200 automatic resumptions). After this point, the migration needs to be restarted manually by running RunMigration again

Failed Attachments

  • Failed attachments are tracked with detailed error messages and timestamps
  • Common failure reasons include:
    • Permission errors (user doesn't have permission to upload attachments)
    • Missing test steps or issues in target instance
    • Invalid attachment data
    • Network errors or API failures
  • Enable retry mode to retry failed attachments after resolving underlying issues

Data Consistency

  • Attachments are migrated with all their properties (filename, MIME type, content)
  • Test steps are matched between source and target instances by issue ID and step index
  • Issue IDs are automatically resolved from source to target using Jira Cloud APIs
  • The migration preserves attachment relationships and metadata
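Matching by step index, as described above, amounts to pairing steps positionally and reporting leftovers as failures; the field names in this sketch are illustrative:

```typescript
interface TestStep {
    id: string;
    action: string;
}

/**
 * Pairs source steps with target steps by position. Source steps with no
 * counterpart in the target (e.g. the target test has fewer steps) are
 * returned as unmatched so they can be recorded as failures.
 */
function matchStepsByIndex(
    source: TestStep[],
    target: TestStep[],
): { matched: Array<[TestStep, TestStep]>; unmatched: TestStep[] } {
    const matched: Array<[TestStep, TestStep]> = [];
    const unmatched: TestStep[] = [];
    source.forEach((step, i) => {
        if (i < target.length) matched.push([step, target[i]]);
        else unmatched.push(step);
    });
    return { matched, unmatched };
}
```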

Performance

  • Batch size (PAGE_SIZE) affects migration speed and memory usage
    • Larger batch sizes increase the delay before a stop request takes effect
    • Recommended: 20-50 attachments per batch, best not to exceed 100 unless higher rate limits are negotiated
  • Higher concurrency (API_CONCURRENCY) speeds up migration but may trigger rate limits or errors from XRay
  • Recommended starting values: PAGE_SIZE: 20-50, API_CONCURRENCY: 3-5
  • Adjust based on your instance's rate limits and performance characteristics

Limitations

  • Memory Constraints: With larger attachments, increasing API_CONCURRENCY risks crashing the ScriptRunner Connect runtime because too many attachments are held in memory at the same time. If this happens, reduce API_CONCURRENCY.

🔧 Troubleshooting

Migration Stuck or Not Progressing

  1. Check Script Invocation Logs: Review logs in ScriptRunner Connect web UI for errors
  2. Check Migration Report: View the report to see current status and any failures
  3. Verify Parameters: Ensure JQL query is set correctly and returns test issues
  4. Check Rate Limiting: Review throttle counts in the report - high counts indicate rate limiting issues

High Failure Rate

  1. Review Failure Reasons: Check the failures table in the migration report for specific error messages
  2. Common Issues:
    • Permission errors: Ensure users have permission to upload attachments in target instance
    • Missing test steps: Verify all test steps exist in target instance with matching step indices
    • Missing issues: Verify all referenced issues exist in target instance
    • Invalid attachment data: Check that attachments are valid and accessible
  3. Enable Verbose Logging: Set VERBOSE: true for detailed error information

Rate Limiting Issues

  1. Reduce Concurrency: Lower API_CONCURRENCY parameter
  2. Check Throttle Counts: Monitor throttle counts in the migration report
  3. Wait and Retry: The migration automatically retries rate-limited requests infinitely until they succeed or time runs out

Report Not Accessible

  1. Wait for First Batch: The report becomes available after the first batch has completed. If you access the report URL before the first batch completes, you will see a "Migration not running" message.
  2. Verify Event Listener: Ensure Generic Event Listener is configured and active
  3. Check URL Path: Verify you're accessing the correct URL path configured for the Event Listener
  4. Check Permissions: Ensure you have access to the workspace

Attachments Not Found

  1. Verify JQL Query: Check that JQL query includes test issues with attachments
  2. Check Source Instance: Verify attachments exist in the source instance for the specified JQL query
  3. Review Filters: Ensure no additional filters are excluding the attachments

API Connections


GenerateDummyTests (TypeScript)

import { RecordStorage } from '@sr-connect/record-storage';
import { throttleAll } from 'promise-throttle-all';
import {
    negotiateAuth,
    createTest,
    type CreateTestTypeInput,
    type CreateTestJiraInput,
    type CreateTestStepInput,
} from './Utils/XRay';

const SOURCE_AUTH_TOKEN_KEY = 'xray-source-auth-token';
const TARGET_AUTH_TOKEN_KEY = 'xray-target-auth-token';

/**
 * Generates a single test data object based on Test.ts file
 */
function generateTestData(projectKey: string): {
    testType: CreateTestTypeInput;
    jira: CreateTestJiraInput;
    steps: CreateTestStepInput[];
} {
    return {
        testType: { name: 'Manual' },
        jira: {
            fields: { summary: 'Test', project: { key: projectKey } },
        },
        steps: [
            {
                action: 'ACTION',
                data: 'DATA',
                result: 'RESULT',
                attachments: [
                    {
                        filename: 'test.txt',
                        mimeType: 'text/plain',
                        data: 'SEVMTE8gV09STEQhISE=',
                    },
                ],
            },
        ],
    };
}

// eslint-disable-next-line @typescript-eslint/no-explicit-any
export default async function (event: any, context: Context<EV>): Promise<void> {
    console.log('Generating dummy tests in source and target instances...');

    const storage = new RecordStorage();
    const baseUrl = context.environment.vars.XRayAPI.BASE_URL;
    const projectKey = context.environment.vars.Generator.PROJECT_KEY;
    const apiConcurrency = context.environment.vars.API_CONCURRENCY;

    // Negotiate auth for source instance
    const sourceAuthToken = await negotiateAuth(
        storage,
        SOURCE_AUTH_TOKEN_KEY,
        baseUrl,
        context.environment.vars.XRayAPI.SOURCE_CLIENT_ID,
        context.environment.vars.XRayAPI.SOURCE_CLIENT_SECRET,
        'source',
        context,
    );

    // Negotiate auth for target instance
    const targetAuthToken = await negotiateAuth(
        storage,
        TARGET_AUTH_TOKEN_KEY,
        baseUrl,
        context.environment.vars.XRayAPI.TARGET_CLIENT_ID,
        context.environment.vars.XRayAPI.TARGET_CLIENT_SECRET,
        'target',
        context,
    );

    console.log('Authentication successful for both instances');
    console.log(`Starting infinite test generation with concurrency limit of ${apiConcurrency}`);

    // Create steps without attachments for target instance
    const createStepsWithoutAttachments = (steps: CreateTestStepInput[]): CreateTestStepInput[] => {
        return steps.map((step) => ({
            action: step.action,
            data: step.data,
            result: step.result,
            // Explicitly omit attachments
        }));
    };

    // Infinite loop to generate tests continuously
    while (true) {
        // Create a batch of tasks up to API_CONCURRENCY limit
        const batchTasks: Array<() => Promise<unknown>> = [];

        for (let i = 0; i < apiConcurrency; i++) {
            const testData = generateTestData(projectKey);

            // Create a task that creates both source and target tests in parallel
            batchTasks.push(async () => {
                const [sourceResult, targetResult] = await Promise.all([
                    createTest(context, baseUrl, sourceAuthToken, testData.testType, testData.jira, testData.steps),
                    createTest(
                        context,
                        baseUrl,
                        targetAuthToken,
                        testData.testType,
                        testData.jira,
                        createStepsWithoutAttachments(testData.steps),
                    ),
                ]);

                if (context.environment.vars.VERBOSE) {
                    console.log(
                        `Created test pair - Source issueId: ${sourceResult.test?.issueId}, Target issueId: ${targetResult.test?.issueId}`,
                    );
                }

                return {
                    sourceIssueId: sourceResult.test?.issueId,
                    targetIssueId: targetResult.test?.issueId,
                };
            });
        }

        // Execute batch with concurrency limit
        await throttleAll(apiConcurrency, batchTasks);
    }
}
GetMigrationReport (TypeScript)

import { RecordStorage } from '@sr-connect/record-storage';
import { HttpEventRequest, HttpEventResponse, buildHTMLResponse, isText } from '@sr-connect/generic-app/events/http';
import type { MigrationState } from './Utils/Types';

const MIGRATION_STATE_KEY = 'xray-attachment-migration-state';
const MIGRATION_STOP_REQUEST_KEY = 'migration-stop-request';

export default async function (event: HttpEventRequest, context: Context<EV>): Promise<HttpEventResponse> {
    const storage = new RecordStorage();

    // Check if stop migration was requested via POST
    if (event.method === 'POST' && isText(event)) {
        const body = event.body as string;
        // Check if the Stop Migration button was pressed (form contains action=stop)
        if (body && body.includes('action=stop')) {
            // Store stop request in Record Storage (idempotent - will overwrite)
            await storage.setValue(MIGRATION_STOP_REQUEST_KEY, { requested: true, timestamp: Date.now() });
        }
    }

    // Load migration state
    const migrationState = await storage.getValue<MigrationState>(MIGRATION_STATE_KEY);
    const stopRequest = await storage.getValue<{ requested: boolean }>(MIGRATION_STOP_REQUEST_KEY);

    if (!migrationState) {
        return buildHTMLResponse(generateHTML('Migration not running', null, false, context));
    }

    const html = generateHTML(
        'XRay Attachment Migration Report',
        migrationState,
        stopRequest?.requested === true,
        context,
    );
    return buildHTMLResponse(html);
}

/**
 * Generates HTML report page
 */
function generateHTML(
    title: string,
    migrationState: MigrationState | null,
    isStopped: boolean,
    context: Context<EV>,
): string {
    if (!migrationState) {
        return `
<!DOCTYPE html>
<html>
<head>
    <title>${title}</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; background-color: #f5f5f5; }
        .container { max-width: 1200px; margin: 0 auto; background-color: white; padding: 20px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.1); }
        h1 { color: #333; border-bottom: 2px solid #0065ff; padding-bottom: 10px; }
        .status { padding: 10px; background-color: #f0f0f0; border-radius: 4px; }
    </style>
</head>
<body>
    <div class="container">
        <h1>${title}</h1>
        <div class="status">
            <p>Migration is not currently running. Start the migration by running the RunMigration script.</p>
        </div>
    </div>
</body>
</html>`;
    }

    const status = migrationState.endTime
        ? { text: 'Completed', class: 'status-completed' }
        : isStopped
            ? { text: 'Stopped', class: 'status-stopped' }
            : { text: 'Running', class: 'status-running' };

    const timeElapsed = Date.now() - migrationState.startTime;
    const timeElapsedSeconds = Math.floor(timeElapsed / 1000);
    const timeElapsedFormatted = formatDuration(timeElapsedSeconds);

    const totalFailed = migrationState.failedItems.length;
    const batches = migrationState.batches || [];
    const averageBatchTime =
        batches.length > 0 ? batches.reduce((sum, batch) => sum + batch.timeSpent, 0) / batches.length : 0;
    const averageBatchTimeSeconds = averageBatchTime / 1000;

    const totalBatchTime = batches.reduce((sum, batch) => sum + batch.timeSpent, 0);
    const totalBatchTimeSeconds = totalBatchTime / 1000;

    const itemsPerSecond =
        migrationState.totalMigrated > 0 && timeElapsedSeconds > 0
            ? (migrationState.totalMigrated / timeElapsedSeconds).toFixed(2)
            : '0.00';

    const itemsPerSecondPerBatch =
        migrationState.totalMigrated > 0 && totalBatchTimeSeconds > 0
            ? (migrationState.totalMigrated / totalBatchTimeSeconds).toFixed(2)
            : '0.00';

    const maxDisplayedBatches = context.environment.vars.REPORT.MAX_DISPLAYED_BATCHES;
    const maxDisplayedFailures = context.environment.vars.REPORT.MAX_DISPLAYED_FAILURES;

    const displayedBatches =
        maxDisplayedBatches === -1
            ? [...batches].reverse()
            : maxDisplayedBatches === 0
                ? []
                : [...batches].reverse().slice(0, maxDisplayedBatches);

    const displayedFailures =
        maxDisplayedFailures === -1
            ? [...migrationState.failedItems].reverse()
            : maxDisplayedFailures === 0
                ? []
                : [...migrationState.failedItems].reverse().slice(0, maxDisplayedFailures);

    const xrayThrottleCount = migrationState.throttleCounts.xray || 0;
    const scriptRunnerConnectThrottleCount = migrationState.throttleCounts.scriptRunnerConnect || 0;

    return `
<!DOCTYPE html>
<html>
<head>
    <title>${title}</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; background-color: #f5f5f5; }
        .container { max-width: 1200px; margin: 0 auto; background-color: white; padding: 20px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.1); }
        h1 { color: #333; border-bottom: 2px solid #0065ff; padding-bottom: 10px; }
        h2 { color: #555; margin-top: 30px; border-bottom: 1px solid #ddd; padding-bottom: 5px; }
        .summary { display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 15px; margin: 20px 0; }
        .summary-item { background-color: #f9f9f9; padding: 15px; border-radius: 4px; border-left: 4px solid #0065ff; }
        .summary-label { font-weight: bold; color: #666; font-size: 0.9em; margin-bottom: 5px; }
        .summary-value { font-size: 1.2em; color: #333; }
        table { width: 100%; border-collapse: collapse; margin: 20px 0; }
        th, td { padding: 12px; text-align: left; border-bottom: 1px solid #ddd; }
        th { background-color: #0065ff; color: white; font-weight: bold; }
        tr:hover { background-color: #f5f5f5; }
        .status-running { color: #0065ff; font-weight: bold; }
        .status-completed { color: #28a745; font-weight: bold; }
        .status-stopped { color: #dc3545; font-weight: bold; }
        .no-data { color: #999; font-style: italic; text-align: center; padding: 20px; }
        .stop-banner { background-color: #d4edda; border: 1px solid #c3e6cb; color: #155724; padding: 15px; border-radius: 4px; margin-bottom: 20px; }
        .stop-button { background-color: #dc3545; color: white; border: none; padding: 10px 20px; border-radius: 4px; cursor: pointer; font-size: 1em; margin-top: 20px; }
        .stop-button:hover { background-color: #c82333; }
        .stop-button:disabled { background-color: #6c757d; cursor: not-allowed; }
    </style>
</head>
<body>
    <div class="container">
        <h1>${title}</h1>
        
        ${isStopped ? '<div class="stop-banner">Migration has been stopped. It will remain stopped until you restart the migration. The actual migration script will stop after the current batch is completed.</div>' : ''}

        ${
            !migrationState.endTime && !isStopped
                ? `
        <form method="post" style="display: inline;">
            <input type="hidden" name="action" value="stop">
            <button type="submit" class="stop-button">Stop Migration</button>
        </form>
        `
                : ''
        }

        <h2>Summary</h2>
        <div class="summary">
            <div class="summary-item">
                <div class="summary-label">Status</div>
                <div class="summary-value ${status.class}">${status.text}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Time Elapsed</div>
                <div class="summary-value">${timeElapsedFormatted}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Migrated Attachments</div>
                <div class="summary-value">${migrationState.totalMigrated}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Failed Attachments</div>
                <div class="summary-value">${totalFailed}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">XRay Throttle Count</div>
                <div class="summary-value">${xrayThrottleCount}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">ScriptRunner Connect Throttle Count</div>
                <div class="summary-value">${scriptRunnerConnectThrottleCount}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Batches Completed</div>
                <div class="summary-value">${migrationState.batchesCompleted}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Average Batch Processing Time</div>
                <div class="summary-value">${averageBatchTimeSeconds.toFixed(2)}s</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Attachments per Second</div>
                <div class="summary-value">${itemsPerSecond}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Attachments per Second per Batch</div>
                <div class="summary-value">${itemsPerSecondPerBatch}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Last Updated</div>
                <div class="summary-value">${formatTimestampUTC(migrationState.lastUpdated)}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">Start Time</div>
                <div class="summary-value">${formatTimestampUTC(migrationState.startTime)}</div>
            </div>
            <div class="summary-item">
                <div class="summary-label">End Time</div>
                <div class="summary-value">${migrationState.endTime ? formatTimestampUTC(migrationState.endTime) : 'N/A'}</div>
            </div>
        </div>

        ${
            displayedBatches.length > 0
                ? `
        <h2>Batches</h2>
        <table>
            <thead>
                <tr>
                    <th>#</th>
                    <th>Batch Type</th>
                    <th>Migrated Attachments</th>
                    <th>Failed Attachments</th>
                    <th>XRay Throttle Count</th>
                    <th>ScriptRunner Connect Throttle Count</th>
                    <th>Batch Time</th>
                    <th>Attachments per Second</th>
                    <th>Completion Time</th>
                </tr>
            </thead>
            <tbody>
                ${displayedBatches
                    .map(
                        (batch) => `
                <tr>
                    <td>${batch.batchNumber}</td>
                    <td>${batch.batchType}</td>
                    <td>${batch.migrated}</td>
                    <td>${batch.failed}</td>
                    <td>${batch.throttleCounts.xray || 0}</td>
                    <td>${batch.throttleCounts.scriptRunnerConnect || 0}</td>
                    <td>${(batch.timeSpent / 1000).toFixed(2)}s</td>
                    <td>${batch.timeSpent > 0 ? (batch.migrated / (batch.timeSpent / 1000)).toFixed(2) : '0.00'}</td>
                    <td>${formatTimestampUTC(batch.completionTime)}</td>
                </tr>
                `,
                    )
                    .join('')}
            </tbody>
        </table>
        `
                : '<p class="no-data">No batches to display</p>'
        }

        ${
            displayedFailures.length > 0
                ? `
        <h2>Failures</h2>
        <table>
            <thead>
                <tr>
                    <th>#</th>
                    <th>Source ID</th>
                    <th>Filename</th>
                    <th>Source Issue Key</th>
                    <th>Time</th>
                    <th>Failure Reason</th>
                </tr>
            </thead>
            <tbody>
                ${displayedFailures
                    .map(
                        (failure) => `
                <tr>
                    <td>${failure.entryNumber}</td>
                    <td>${failure.sourceId}</td>
                    <td>${failure.filename || 'N/A'}</td>
                    <td>${failure.sourceIssueKey || 'N/A'}</td>
                    <td>${formatTimestampUTC(failure.timestamp)}</td>
                    <td>${escapeHtml(failure.reason)}</td>
                </tr>
                `,
                    )
                    .join('')}
            </tbody>
        </table>
        `
                : '<p class="no-data">No failures to display</p>'
        }
    </div>
</body>
</html>`;
}

/**
 * Formats a duration in seconds as a human-readable string
 */
function formatDuration(seconds: number): string {
    const days = Math.floor(seconds / 86400);
    const hours = Math.floor((seconds % 86400) / 3600);
    const minutes = Math.floor((seconds % 3600) / 60);
    const secs = seconds % 60;

    const parts: string[] = [];
    if (days > 0) parts.push(`${days}d`);
    if (hours > 0) parts.push(`${hours}h`);
    if (minutes > 0) parts.push(`${minutes}m`);
    if (secs > 0 || parts.length === 0) parts.push(`${secs}s`);

    return parts.join(' ');
}
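As a quick sanity check, here is a standalone copy of `formatDuration` with a few sample inputs (assuming whole-second inputs; fractional seconds would pass through unrounded):

```typescript
// Standalone copy of formatDuration, for illustration only.
function formatDuration(seconds: number): string {
    const days = Math.floor(seconds / 86400);
    const hours = Math.floor((seconds % 86400) / 3600);
    const minutes = Math.floor((seconds % 3600) / 60);
    const secs = seconds % 60;

    const parts: string[] = [];
    if (days > 0) parts.push(`${days}d`);
    if (hours > 0) parts.push(`${hours}h`);
    if (minutes > 0) parts.push(`${minutes}m`);
    // Always emit at least one part, so 0 renders as "0s".
    if (secs > 0 || parts.length === 0) parts.push(`${secs}s`);

    return parts.join(' ');
}

console.log(formatDuration(0));     // "0s"
console.log(formatDuration(3725)); // "1h 2m 5s"
console.log(formatDuration(90061)); // "1d 1h 1m 1s"
```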

/**
 * Formats timestamp as UTC string
 */
function formatTimestampUTC(timestamp: number): string {
    return new Date(timestamp).toUTCString().replace('GMT', 'UTC');
}

/**
 * Escapes HTML special characters
 */
function escapeHtml(text: string): string {
    const map: Record<string, string> = {
        '&': '&amp;',
        '<': '&lt;',
        '>': '&gt;',
        '"': '&quot;',
        "'": '&#039;',
    };
    return text.replace(/[&<>"']/g, (m) => map[m]);
}
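For illustration, the same `escapeHtml` helper (standalone copy) applied to a string containing all five characters it escapes:

```typescript
// Standalone copy of escapeHtml, for illustration only.
function escapeHtml(text: string): string {
    const map: Record<string, string> = {
        '&': '&amp;',
        '<': '&lt;',
        '>': '&gt;',
        '"': '&quot;',
        "'": '&#039;',
    };
    // The callback form avoids re-scanning replacement text for further matches.
    return text.replace(/[&<>"']/g, (m) => map[m]);
}

console.log(escapeHtml(`<a href="x">Tom & Jerry's</a>`));
// &lt;a href=&quot;x&quot;&gt;Tom &amp; Jerry&#039;s&lt;/a&gt;
```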
TypeScript: ResetCursor

import { RecordStorage } from '@sr-connect/record-storage';
import type { MigrationState } from './Utils/Types';

const MIGRATION_STATE_KEY = 'xray-attachment-migration-state';

/**
 * Resets the pagination state. This must be run before switching over to retrying failed migrations.
 */
// eslint-disable-next-line @typescript-eslint/no-unused-vars
export default async function (event: unknown, context: Context<EV>): Promise<void> {
    const storage = new RecordStorage();

    // Load migration state
    const migrationState = await storage.getValue<MigrationState>(MIGRATION_STATE_KEY);

    if (!migrationState) {
        console.log('No migration state found. Nothing to reset.');
        return;
    }

    // Reset pagination state to initial values
    migrationState.paginationState = {
        offset: 0,
        hasMore: true,
    };

    migrationState.lastUpdated = Date.now();

    // Save updated state
    await storage.setValue(MIGRATION_STATE_KEY, migrationState);

    console.log('Pagination state has been reset successfully');
    console.log('Migration will now start from the beginning, allowing failed items to be retried.');
}
TypeScript: ResetMigration

import { RecordStorage } from '@sr-connect/record-storage';

const MIGRATION_STATE_KEY = 'xray-attachment-migration-state';
const MIGRATION_STOP_REQUEST_KEY = 'migration-stop-request';

// eslint-disable-next-line @typescript-eslint/no-explicit-any
export default async function (event: any, context: Context<EV>): Promise<void> {
    const storage = new RecordStorage();

    // Delete migration state
    await storage.deleteValue(MIGRATION_STATE_KEY);
    console.log('Migration state cleared');

    // Delete stop request
    await storage.deleteValue(MIGRATION_STOP_REQUEST_KEY);
    console.log('Stop request cleared');

    console.log('Migration reset completed successfully');
}
TypeScript: RunMigration

import { RecordStorage } from '@sr-connect/record-storage';
import { triggerScript } from '@sr-connect/trigger';
import { convertBufferToBase64 } from '@sr-connect/convert';
import { parse } from 'file-type-mime';
import { retry } from '@managed-api/commons-core';
import type { TooManyRequestsError } from '@managed-api/commons-core';
import { throttleAll } from 'promise-throttle-all';
import JiraCloudSource from './api/jira/cloud/source';
import JiraCloudTarget from './api/jira/cloud/target';
import {
    negotiateAuth,
    getAllAttachmentsForTestSteps,
    getAllTestSteps,
    updateTestStep,
    fetchWithRetry,
} from './Utils/XRay';
import type { FlattenedAttachment, MigrationState, BatchState, FailedItem } from './Utils/Types';

const SOURCE_AUTH_TOKEN_KEY = 'xray-source-auth-token';
const TARGET_AUTH_TOKEN_KEY = 'xray-target-auth-token';
const MIGRATION_STATE_KEY = 'xray-attachment-migration-state';
const MIGRATION_STOP_REQUEST_KEY = 'migration-stop-request';

// eslint-disable-next-line @typescript-eslint/no-explicit-any
export default async function (event: any, context: Context<EV>): Promise<void> {
    const storage = new RecordStorage();

    // Load migration state
    const migrationState = await loadMigrationState(storage, context);

    // Clear stop request at start
    await storage.deleteValue(MIGRATION_STOP_REQUEST_KEY);

    // Configure error handling for Managed APIs
    // Initial setup uses empty throttle counts; it is reconfigured per batch with that batch's own counts
    const initialThrottleCounts: { scriptRunnerConnect?: number; xray?: number } = {};
    configureErrorHandling(context, initialThrottleCounts);

    // Negotiate auth for both instances
    const baseUrl = context.environment.vars.XRayAPI.BASE_URL;
    const sourceAuthToken = await negotiateAuth(
        storage,
        SOURCE_AUTH_TOKEN_KEY,
        baseUrl,
        context.environment.vars.XRayAPI.SOURCE_CLIENT_ID,
        context.environment.vars.XRayAPI.SOURCE_CLIENT_SECRET,
        'source',
        context,
    );
    const targetAuthToken = await negotiateAuth(
        storage,
        TARGET_AUTH_TOKEN_KEY,
        baseUrl,
        context.environment.vars.XRayAPI.TARGET_CLIENT_ID,
        context.environment.vars.XRayAPI.TARGET_CLIENT_SECRET,
        'target',
        context,
    );

    console.log('Authentication successful for both instances');

    // Create a clone of the failed items list before processing begins
    // The clone is consumed during processing, with each item removed as it is handled
    // Start from the cursor position (offset) if one exists
    let clonedFailedItems: FailedItem[] | undefined;
    if (context.environment.vars.REPLY_FAILURES) {
        const failedItems = migrationState.failedItems ?? [];
        const offset = migrationState.paginationState?.offset ?? 0;
        // Start from the offset position to respect cursor position
        clonedFailedItems = failedItems.slice(offset);
    }

    // Main processing loop
    while (true) {
        // Check stop request
        if (await checkStopRequest(storage)) {
            console.log('Migration halted due to user request');
            await saveMigrationState(storage, migrationState);
            break;
        }

        // Check if we have items to process when retrying failures
        if (context.environment.vars.REPLY_FAILURES) {
            if (!clonedFailedItems || clonedFailedItems.length === 0) {
                // Retry cycle has finished - set endTime (migration is complete unless restarted)
                // There may still be new failures, but the retry cycle is done
                migrationState.endTime = Date.now();
                await saveMigrationState(storage, migrationState);
                console.log('Retry cycle completed - migration finished');
                break;
            }
        }

        // Initialize batch
        const batchState = initializeBatch(migrationState, context);

        // Configure error handling for this batch with batchState.throttleCounts
        configureErrorHandling(context, batchState.throttleCounts);

        // Process batch
        const hasMore = await processBatch(
            context,
            migrationState,
            batchState,
            sourceAuthToken,
            targetAuthToken,
            baseUrl,
            clonedFailedItems,
        );

        // Update pagination state
        if (!context.environment.vars.REPLY_FAILURES) {
            const currentOffset = migrationState.paginationState?.offset || 0;
            migrationState.paginationState = {
                offset: currentOffset + batchState.migrated + batchState.failed,
                hasMore,
            };
        }

        // Update migration state
        await updateMigrationState(storage, context, migrationState, batchState);

        // Check completion
        if (checkCompletion(context, migrationState, clonedFailedItems)) {
            migrationState.endTime = Date.now();
            await saveMigrationState(storage, migrationState);
            console.log('Migration completed successfully');
            break;
        }

        // Check time remaining
        if (!isTimeLeftForNextBatch(context, migrationState)) {
            // Adjust cursor position if retrying failures
            if (context.environment.vars.REPLY_FAILURES && batchState.lastFailedItemId) {
                const lastFailedIndex = migrationState.failedItems.findIndex(
                    (item) => item.sourceId === batchState.lastFailedItemId,
                );
                if (lastFailedIndex >= 0) {
                    migrationState.paginationState = migrationState.paginationState || { hasMore: true };
                    migrationState.paginationState.offset = lastFailedIndex + 1;
                }
            }

            console.log('Time limit approaching, triggering script restart');
            await saveMigrationState(storage, migrationState);
            await triggerScript('RunMigration');
            break;
        }
    }
}

/**
 * Loads migration state from Record Storage or initializes new state
 */
async function loadMigrationState(storage: RecordStorage, context: Context<EV>): Promise<MigrationState> {
    const existingState = await storage.getValue<MigrationState>(MIGRATION_STATE_KEY);
    if (existingState) {
        // Clear endTime if retrying failures
        if (existingState.endTime && context.environment.vars.REPLY_FAILURES) {
            delete existingState.endTime;
        }
        return existingState;
    }

    // Initialize new migration state
    return {
        startTime: Date.now(),
        lastUpdated: Date.now(),
        totalMigrated: 0,
        batchesCompleted: 0,
        throttleCounts: {},
        batches: [],
        failedItems: [],
        paginationState: {
            hasMore: true,
        },
    };
}

/**
 * Configures error handling for Managed APIs
 * @param context - Script context
 * @param throttleCounts - Throttle counts tracker to update on rate limit errors
 */
function configureErrorHandling(
    context: Context<EV>,
    throttleCounts: { scriptRunnerConnect?: number; xray?: number },
): void {
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    const customErrorStrategy = (builder: any) => {
        // eslint-disable-next-line @typescript-eslint/no-unused-vars
        return builder.http429Error((error: TooManyRequestsError<unknown>, attempt: number) => {
            // Check rate limit source
            const isScriptRunnerConnectRateLimit = error.response.headers.has('x-stitch-rate-limit');
            if (isScriptRunnerConnectRateLimit) {
                throttleCounts.scriptRunnerConnect = (throttleCounts.scriptRunnerConnect || 0) + 1;
            } else {
                throttleCounts.xray = (throttleCounts.xray || 0) + 1;
            }

            // Check time availability
            const timeLeft =
                context.startTime + context.timeout - context.environment.vars.Advanced.RETRY_CUTOFF_TIME * 1000;
            const hasTime = timeLeft > Date.now();

            if (!hasTime) {
                throw new Error('Insufficient time remaining for retry');
            }

            // Extract retry-after header
            const retryAfter =
                error.response.headers.get('retry-after') || error.response.headers.get('Retry-After') || '1000';
            return retry(parseInt(retryAfter, 10));
        });
    };

    JiraCloudSource.setGlobalErrorStrategy(customErrorStrategy);
    JiraCloudTarget.setGlobalErrorStrategy(customErrorStrategy);
}

/**
 * Checks if stop request exists in Record Storage
 */
async function checkStopRequest(storage: RecordStorage): Promise<boolean> {
    const stopRequest = await storage.getValue<{ requested: boolean }>(MIGRATION_STOP_REQUEST_KEY);
    return stopRequest?.requested === true;
}

/**
 * Initializes batch state
 */
function initializeBatch(migrationState: MigrationState, context: Context<EV>): BatchState {
    return {
        batchNumber: migrationState.batchesCompleted + 1,
        batchType: context.environment.vars.REPLY_FAILURES ? 'RETRY_FAILURES' : 'MIGRATE_ITEMS',
        migrated: 0,
        failed: 0,
        throttleCounts: {},
        timeSpent: 0,
        completionTime: 0,
        processedRetrySourceIds: [],
        successfullyRetriedSourceIds: [],
        lastFailedItemId: undefined,
    };
}

/**
 * Extracts error message from error object
 */
function extractErrorMessage(error: unknown): string {
    if (error instanceof Error) {
        return error.message;
    }
    if (typeof error === 'string') {
        return error;
    }
    return JSON.stringify(error);
}
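A standalone copy of `extractErrorMessage` shows how each error shape is normalized to a string:

```typescript
// Standalone copy of extractErrorMessage, for illustration only.
function extractErrorMessage(error: unknown): string {
    if (error instanceof Error) {
        return error.message;
    }
    if (typeof error === 'string') {
        return error;
    }
    // Anything else (plain objects, numbers, etc.) is serialized as JSON.
    return JSON.stringify(error);
}

console.log(extractErrorMessage(new Error('boom'))); // "boom"
console.log(extractErrorMessage('plain string'));    // "plain string"
console.log(extractErrorMessage({ status: 429 }));   // {"status":429}
```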

/**
 * Records a failed attachment in the migration state
 * Updates existing failed item if it exists, otherwise creates a new one
 * @param migrationState - Migration state to update
 * @param sourceAttachment - The attachment that failed
 * @param error - The error that occurred
 * @param batchState - Optional batch state to update failure counts
 */
function recordFailedAttachment(
    migrationState: MigrationState,
    sourceAttachment: FlattenedAttachment,
    error: unknown,
    batchState?: BatchState,
): void {
    const errorMessage = extractErrorMessage(error);
    const existingFailedItemIndex = migrationState.failedItems.findIndex(
        (item) => item.sourceId === sourceAttachment.id,
    );

    if (existingFailedItemIndex >= 0) {
        // Update existing failed item
        migrationState.failedItems[existingFailedItemIndex] = {
            ...migrationState.failedItems[existingFailedItemIndex],
            timestamp: Date.now(),
            reason: errorMessage,
        };
    } else {
        // Add new failed item
        migrationState.failedItems.push({
            entryNumber: migrationState.failedItems.length + 1,
            sourceId: sourceAttachment.id,
            timestamp: Date.now(),
            reason: errorMessage,
            filename: sourceAttachment.filename,
            sourceIssueKey: sourceAttachment.sourceIssueKey,
            sourceIssueId: sourceAttachment.sourceIssueId,
            targetIssueId: sourceAttachment.targetIssueId,
            stepIndex: sourceAttachment.stepIndex,
            downloadLink: sourceAttachment.downloadLink,
        });
    }

    // Update batch state if provided
    if (batchState) {
        batchState.failed++;
        batchState.lastFailedItemId = sourceAttachment.id;
    }
}
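The update-or-append behaviour of `recordFailedAttachment` can be sketched with a trimmed-down version that keeps only `sourceId`, `timestamp`, and `reason` (the real function also records full attachment metadata and batch counters):

```typescript
interface FailedItemLite {
    entryNumber: number;
    sourceId: string;
    timestamp: number;
    reason: string;
}

// Trimmed-down sketch: update the existing entry for a sourceId, or append a new one.
function recordFailure(items: FailedItemLite[], sourceId: string, reason: string): void {
    const idx = items.findIndex((item) => item.sourceId === sourceId);
    if (idx >= 0) {
        // Keep the original entryNumber; refresh the timestamp and reason.
        items[idx] = { ...items[idx], timestamp: Date.now(), reason };
    } else {
        items.push({ entryNumber: items.length + 1, sourceId, timestamp: Date.now(), reason });
    }
}

const items: FailedItemLite[] = [];
recordFailure(items, 'att-1', 'HTTP 500');
recordFailure(items, 'att-2', 'timeout');
recordFailure(items, 'att-1', 'HTTP 429'); // updates the existing att-1 entry
console.log(items.length);    // 2
console.log(items[0].reason); // "HTTP 429"
```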

/**
 * Processes a batch of attachments
 */
async function processBatch(
    context: Context<EV>,
    migrationState: MigrationState,
    batchState: BatchState,
    sourceAuthToken: string,
    targetAuthToken: string,
    baseUrl: string,
    clonedFailedItems?: FailedItem[],
): Promise<boolean> {
    const batchStartTime = Date.now();

    // Determine batch source
    let batchItems: FlattenedAttachment[] = [];
    let hasMoreItems = true;

    if (context.environment.vars.REPLY_FAILURES) {
        // Retry failures mode - get batch from cloned list
        if (!clonedFailedItems || clonedFailedItems.length === 0) {
            hasMoreItems = false;
        } else {
            // Get batch size items from the cloned list
            const batchSize = context.environment.vars.PAGE_SIZE;
            const itemsToRetry = clonedFailedItems.slice(0, batchSize);

            // Remove those items from the cloned list immediately
            clonedFailedItems.splice(0, batchSize);

            // Track which items we're processing
            batchState.processedRetrySourceIds = itemsToRetry.map((item) => item.sourceId);

            try {
                // Reconstruct FlattenedAttachment objects from failed items
                // We need to re-fetch attachments to get current download links (they may expire)
                // But we can use stored metadata to match them
                // Note: For retry mode, we fetch all attachments (no pagination) to find the failed ones
                const result = await getAllAttachmentsForTestSteps(
                    context,
                    sourceAuthToken,
                    undefined,
                    batchState.throttleCounts,
                );
                batchItems = itemsToRetry
                    .map((failedItem) => {
                        const attachment = result.attachments.find((att) => att.id === failedItem.sourceId);
                        if (attachment) {
                            return attachment;
                        }
                        // If attachment not found in current fetch, reconstruct from stored data
                        // This handles the case where the attachment might have been deleted
                        if (failedItem.downloadLink && failedItem.sourceIssueId !== undefined) {
                            return {
                                id: failedItem.sourceId,
                                filename: failedItem.filename || 'unknown',
                                storedInJira: true,
                                downloadLink: failedItem.downloadLink,
                                sourceIssueId: failedItem.sourceIssueId,
                                sourceIssueKey: failedItem.sourceIssueKey || '',
                                targetIssueId: failedItem.targetIssueId || null,
                                stepIndex: failedItem.stepIndex || 0,
                            } as FlattenedAttachment;
                        }
                        return null;
                    })
                    .filter((item): item is FlattenedAttachment => item !== null);
            } catch (error) {
                console.error('Error fetching failed attachments for retry:', error);
                // If API call fails, add items back to the front of cloned list so they can be retried
                clonedFailedItems.unshift(...itemsToRetry);
                return false;
            }
        }
    } else {
        // Regular migration mode - fetch next batch with pagination
        const currentOffset = migrationState.paginationState?.offset || 0;
        const result = await getAllAttachmentsForTestSteps(
            context,
            sourceAuthToken,
            currentOffset,
            batchState.throttleCounts,
        );
        batchItems = result.attachments;
        hasMoreItems = result.hasMore;
    }

    if (batchItems.length === 0 && !hasMoreItems) {
        return false;
    }

    // Extract unique target issue IDs from batch items
    // For retry failures mode, if targetIssueId is not set, try to look it up by sourceIssueKey
    const targetIssueIdsSet = new Set<string>();
    for (const item of batchItems) {
        if (item.targetIssueId) {
            targetIssueIdsSet.add(item.targetIssueId);
        } else if (context.environment.vars.REPLY_FAILURES && item.sourceIssueKey) {
            // Try to look up target issue ID by source issue key
            try {
                const targetIssue = await JiraCloudTarget.Issue.getIssue({
                    issueIdOrKey: item.sourceIssueKey,
                    fields: ['id'],
                });
                if (targetIssue.id) {
                    targetIssueIdsSet.add(targetIssue.id);
                    // Update the item with the found target issue ID
                    item.targetIssueId = targetIssue.id;
                }
            } catch (error) {
                console.error(`Failed to fetch target issue ${item.sourceIssueKey}:`, error);
                // Record as failed so the item can be retried later
                recordFailedAttachment(migrationState, item, error, batchState);
            }
        }
    }

    const targetIssueIds = Array.from(targetIssueIdsSet);
    const targetSteps = await getAllTestSteps(context, targetAuthToken, targetIssueIds, batchState.throttleCounts);
    console.log(`Loaded ${targetSteps.length} target steps for ${targetIssueIds.length} target issues`);

    // Match attachments with target steps
    const matchResult = matchAttachmentsWithSteps(batchItems, targetSteps, migrationState, batchState, context);

    // Create migration tasks for concurrent execution
    const migrationTasks = matchResult.matched.map(({ sourceAttachment, targetStepId }) => async () => {
        try {
            // Simulate failures if configured
            if (context.environment.vars.Simulation?.PERCENTAGE_OF_SIMULATED_FAILURES) {
                const randomValue = Math.random();
                if (randomValue <= context.environment.vars.Simulation.PERCENTAGE_OF_SIMULATED_FAILURES) {
                    throw new Error('Simulated failure');
                }
            }

            // Migrate attachment
            await migrateAttachment(
                sourceAttachment,
                targetStepId,
                sourceAuthToken,
                targetAuthToken,
                baseUrl,
                context,
                batchState,
            );

            batchState.migrated++;
            if (context.environment.vars.REPLY_FAILURES) {
                batchState.successfullyRetriedSourceIds.push(sourceAttachment.id);
            }
        } catch (error) {
            recordFailedAttachment(migrationState, sourceAttachment, error, batchState);

            if (context.environment.vars.HALT_WHEN_ATTACHMENT_MIGRATION_FAILS) {
                throw error;
            }

            if (context.environment.vars.VERBOSE) {
                const errorMessage = extractErrorMessage(error);
                console.error(`Failed to migrate attachment ${sourceAttachment.filename}:`, errorMessage);
            }
        }
    });

    // Execute migration tasks concurrently with concurrency limit
    await throttleAll(context.environment.vars.API_CONCURRENCY, migrationTasks);

    batchState.timeSpent = Date.now() - batchStartTime;
    batchState.completionTime = Date.now();

    return hasMoreItems;
}

/**
 * Matches source attachments with target steps
 * Catches errors and adds failed attachments to the failed items list for retry later
 */
function matchAttachmentsWithSteps(
    sourceAttachments: FlattenedAttachment[],
    targetSteps: Array<{ stepId: string; issueId: string; stepIndex: number }>,
    migrationState: MigrationState,
    batchState: BatchState,
    context: Context<EV>,
): {
    matched: Array<{ sourceAttachment: FlattenedAttachment; targetStepId: string }>;
    failed: FlattenedAttachment[];
} {
    const matched: Array<{ sourceAttachment: FlattenedAttachment; targetStepId: string }> = [];
    const failed: FlattenedAttachment[] = [];

    for (const sourceAttachment of sourceAttachments) {
        try {
            if (!sourceAttachment.targetIssueId) {
                throw new Error(
                    `No target issue ID found for attachment ${sourceAttachment.id}. Source: issueId=${sourceAttachment.sourceIssueId}, issueKey=${sourceAttachment.sourceIssueKey}`,
                );
            }

            const matchingStep = targetSteps.find(
                (step) =>
                    step.issueId === sourceAttachment.targetIssueId && step.stepIndex === sourceAttachment.stepIndex,
            );

            if (!matchingStep) {
                throw new Error(
                    `No matching step found for attachment ${sourceAttachment.id}. Source: issueId=${sourceAttachment.sourceIssueId}, stepIndex=${sourceAttachment.stepIndex}, targetIssueId=${sourceAttachment.targetIssueId}`,
                );
            }

            matched.push({
                sourceAttachment,
                targetStepId: matchingStep.stepId,
            });
        } catch (error) {
            // Catch matching errors and add to failed items list
            failed.push(sourceAttachment);
            recordFailedAttachment(migrationState, sourceAttachment, error, batchState);

            if (context.environment.vars.VERBOSE) {
                const errorMessage = extractErrorMessage(error);
                console.error(`Failed to match attachment ${sourceAttachment.filename}:`, errorMessage);
            }

            // If halt on failure is enabled, throw the error to stop migration
            if (context.environment.vars.HALT_WHEN_ATTACHMENT_MIGRATION_FAILS) {
                throw error;
            }
        }
    }

    return { matched, failed };
}

/**
 * Migrates a single attachment from source to target
 */
async function migrateAttachment(
    sourceAttachment: FlattenedAttachment,
    targetStepId: string,
    sourceAuthToken: string,
    targetAuthToken: string,
    baseUrl: string,
    context: Context<EV>,
    batchState: BatchState,
): Promise<void> {
    // Download attachment content from source with retry logic
    const downloadResponse = await fetchWithRetry(
        sourceAttachment.downloadLink,
        {
            headers: {
                Authorization: `Bearer ${sourceAuthToken}`,
            },
        },
        context,
        batchState.throttleCounts,
    );

    if (!downloadResponse.ok) {
        throw new Error(`Failed to download attachment: ${downloadResponse.status} ${downloadResponse.statusText}`);
    }

    // Convert array buffer to base64
    const arrayBuffer = await downloadResponse.arrayBuffer();

    // Detect MIME type from file content
    const fileTypeResult = parse(arrayBuffer);
    const mimeType = fileTypeResult?.mime || 'application/octet-stream';

    const base64Content = convertBufferToBase64(arrayBuffer);

    // Upload attachment to target step
    await updateTestStep(
        baseUrl,
        targetAuthToken,
        targetStepId,
        base64Content,
        sourceAttachment.filename,
        mimeType,
        context,
        batchState.throttleCounts,
    );
}

/**
 * Updates migration state with batch results and saves to Record Storage
 */
async function updateMigrationState(
    storage: RecordStorage,
    context: Context<EV>,
    migrationState: MigrationState,
    batchState: BatchState,
): Promise<void> {
    migrationState.totalMigrated += batchState.migrated;
    migrationState.batchesCompleted++;

    // Update throttle counts
    if (batchState.throttleCounts.scriptRunnerConnect) {
        migrationState.throttleCounts.scriptRunnerConnect =
            (migrationState.throttleCounts.scriptRunnerConnect || 0) + batchState.throttleCounts.scriptRunnerConnect;
    }

    if (batchState.throttleCounts.xray) {
        migrationState.throttleCounts.xray = (migrationState.throttleCounts.xray || 0) + batchState.throttleCounts.xray;
    }

    // Remove successfully retried items
    if (context.environment.vars.REPLY_FAILURES && batchState.successfullyRetriedSourceIds.length > 0) {
        migrationState.failedItems = migrationState.failedItems.filter(
            (item) => !batchState.successfullyRetriedSourceIds.includes(item.sourceId),
        );
    }

    // Add batch result
    migrationState.batches.push({
        batchNumber: batchState.batchNumber,
        batchType: batchState.batchType,
        migrated: batchState.migrated,
        failed: batchState.failed,
        throttleCounts: batchState.throttleCounts,
        timeSpent: batchState.timeSpent,
        completionTime: batchState.completionTime,
    });

    migrationState.lastUpdated = Date.now();

    // Pagination state updates are handled in processBatch based on the API response

    // Save migration state to Record Storage
    await saveMigrationState(storage, migrationState);

    // Log batch completion
    console.log(
        `Batch ${batchState.batchNumber} completed: ${batchState.migrated} migrated, ${batchState.failed} failed, ${(batchState.timeSpent / 1000).toFixed(2)}s`,
    );
}

/**
 * Checks if migration is complete
 */
function checkCompletion(
    context: Context<EV>,
    migrationState: MigrationState,
    clonedFailedItems?: FailedItem[],
): boolean {
    if (context.environment.vars.REPLY_FAILURES) {
        // Check if there are more failures to retry in the cloned list
        // The cloned list is what we're actually processing from
        const remainingInClone = clonedFailedItems?.length ?? 0;

        // If clone is empty, retry cycle has finished - set endTime
        // Migration is considered complete unless restarted, even if there are new failures
        if (remainingInClone <= 0) {
            return true;
        }
        return false;
    } else {
        // Check if pagination has reached end
        return migrationState.paginationState?.hasMore === false;
    }
}

/**
 * Checks if there's time left for the next batch
 */
function isTimeLeftForNextBatch(context: Context<EV>, migrationState: MigrationState): boolean {
    // Special case: If RESTART_SCRIPT_AFTER_FIRST_BATCH is enabled, restart after the first batch
    if (context.environment.vars.Simulation?.RESTART_SCRIPT_AFTER_FIRST_BATCH) {
        return false;
    }

    const batches = migrationState?.batches ?? [];

    // Calculate average batch time
    const averageBatchTime =
        batches.length > 0 ? batches.reduce((prev, current) => prev + current.timeSpent, 0) / batches.length : 0;

    // Calculate required time with multiplier
    let timeRequiredForNextBatch =
        averageBatchTime * context.environment.vars.Advanced.BATCH_CYCLE_CUTOFF_TIME_MULTIPLIER;

    // Enforce minimum time requirement
    if (timeRequiredForNextBatch < context.environment.vars.Advanced.BATCH_CYCLE_MIN_TIME * 1000) {
        timeRequiredForNextBatch = context.environment.vars.Advanced.BATCH_CYCLE_MIN_TIME * 1000;
    }

    // Check if time remains
    return context.startTime + context.timeout - timeRequiredForNextBatch > Date.now();
}
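The time-budget arithmetic above can be isolated into a small standalone sketch (the function name `timeRequiredForNextBatch` and the sample numbers are illustrative, not part of the template):

```typescript
// Standalone sketch of the time-budget calculation used by isTimeLeftForNextBatch.
function timeRequiredForNextBatch(
    batchTimesMs: number[],
    cutoffMultiplier: number,
    minTimeSeconds: number,
): number {
    // Average duration of previous batches; zero when there is no history yet.
    const average =
        batchTimesMs.length > 0
            ? batchTimesMs.reduce((sum, t) => sum + t, 0) / batchTimesMs.length
            : 0;
    // Scale the average by the safety multiplier, but never go below the floor.
    return Math.max(average * cutoffMultiplier, minTimeSeconds * 1000);
}

// Three previous batches averaging 40s, multiplier 1.5 -> 60s required.
console.log(timeRequiredForNextBatch([30000, 40000, 50000], 1.5, 30)); // 60000
// No history yet -> the 30s minimum applies.
console.log(timeRequiredForNextBatch([], 1.5, 30)); // 30000
```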

/**
 * Saves migration state to Record Storage
 */
async function saveMigrationState(storage: RecordStorage, migrationState: MigrationState): Promise<void> {
    await storage.setValue(MIGRATION_STATE_KEY, migrationState);
}
TypeScript: Utils/Types

/**
 * Shared TypeScript types for the XRay test step attachments migration project
 */

/**
 * Optional throttle counts tracker for batch state
 */
export interface ThrottleCounts {
    scriptRunnerConnect?: number;
    xray?: number;
}

/**
 * Flattened attachment with additional metadata
 */
export interface FlattenedAttachment {
    // Original attachment fields
    id: string;
    filename: string;
    storedInJira: boolean;
    downloadLink: string;
    // Additional metadata
    sourceIssueId: string;
    sourceIssueKey: string;
    targetIssueId: string | null;
    stepIndex: number;
}

/**
 * Flattened test step with metadata
 */
export interface FlattenedTestStep {
    stepId: string;
    issueId: string;
    stepIndex: number;
}

/**
 * Migration state interface
 */
export interface MigrationState {
    startTime: number;
    endTime?: number;
    lastUpdated: number;
    paginationState?: {
        offset?: number;
        hasMore: boolean;
    };
    totalMigrated: number;
    batchesCompleted: number;
    throttleCounts: ThrottleCounts;
    batches: BatchResult[];
    failedItems: FailedItem[];
}

/**
 * Batch result interface
 */
export interface BatchResult {
    batchNumber: number;
    batchType: 'MIGRATE_ITEMS' | 'RETRY_FAILURES';
    migrated: number;
    failed: number;
    throttleCounts: ThrottleCounts;
    timeSpent: number;
    completionTime: number;
}

/**
 * Failed item interface
 */
export interface FailedItem {
    entryNumber: number;
    sourceId: string;
    timestamp: number;
    reason: string;
    filename?: string;
    sourceIssueKey?: string;
    sourceIssueId?: string;
    targetIssueId?: string | null;
    stepIndex?: number;
    downloadLink?: string;
}

/**
 * Batch state interface
 */
export interface BatchState {
    batchNumber: number;
    batchType: 'MIGRATE_ITEMS' | 'RETRY_FAILURES';
    migrated: number;
    failed: number;
    throttleCounts: ThrottleCounts;
    timeSpent: number;
    completionTime: number;
    processedRetrySourceIds: string[];
    successfullyRetriedSourceIds: string[];
    lastFailedItemId?: string;
}
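As an illustration of how these shared types compose, here is a minimal sketch of folding a completed batch into the migration state. The `recordBatch` helper is hypothetical (not part of the template), and the inline interfaces are trimmed copies of the ones the template imports from Utils/Types:

```typescript
// Trimmed inline copies of the shared types (the template imports the full versions from Utils/Types)
interface ThrottleCounts { scriptRunnerConnect?: number; xray?: number; }
interface BatchResult { batchNumber: number; migrated: number; failed: number; completionTime: number; }
interface MigrationState {
    startTime: number;
    lastUpdated: number;
    totalMigrated: number;
    batchesCompleted: number;
    throttleCounts: ThrottleCounts;
    batches: BatchResult[];
}

// Hypothetical helper: fold a completed batch into the running migration state,
// advancing the counters and timestamp without mutating the previous state object
function recordBatch(state: MigrationState, batch: BatchResult): MigrationState {
    return {
        ...state,
        lastUpdated: batch.completionTime,
        totalMigrated: state.totalMigrated + batch.migrated,
        batchesCompleted: state.batchesCompleted + 1,
        batches: [...state.batches, batch],
    };
}
```

Returning a fresh object (rather than mutating in place) keeps the state safe to persist to Record Storage at any point between batches.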
TypeScriptUtils/XRay

import { query, mutation } from 'gql-query-builder';
import JiraCloudSource from '../api/jira/cloud/source';
import JiraCloudTarget from '../api/jira/cloud/target';
import type { ThrottleCounts, FlattenedAttachment, FlattenedTestStep } from './Types';

/**
 * XRay API utility functions
 */

/**
 * Fetches with retry logic for rate limiting
 * @param url - URL to fetch
 * @param options - Fetch options
 * @param context - Script context containing environment variables and timing information
 * @param throttleCounts - Optional throttle counts tracker to update on rate limit errors
 * @returns Response from fetch
 */
export async function fetchWithRetry(
    url: string,
    options: RequestInit,
    context: Context<EV>,
    throttleCounts?: ThrottleCounts,
): Promise<Response> {
    while (true) {
        const response = await fetch(url, options);

        if (response.status !== 429) {
            return response;
        }

        // Check rate limit source and update throttle counts if provided
        if (throttleCounts) {
            const isScriptRunnerConnectRateLimit = response.headers.has('x-stitch-rate-limit');
            if (isScriptRunnerConnectRateLimit) {
                throttleCounts.scriptRunnerConnect = (throttleCounts.scriptRunnerConnect || 0) + 1;
            } else {
                throttleCounts.xray = (throttleCounts.xray || 0) + 1;
            }
        }

        // Check time availability
        const timeLeft =
            context.startTime + context.timeout - context.environment.vars.Advanced.RETRY_CUTOFF_TIME * 1000;
        const hasTime = timeLeft > Date.now();

        if (!hasTime) {
            throw new Error('Insufficient time remaining for retry');
        }

        // Extract the Retry-After header (Headers.get is case-insensitive, so one
        // lookup covers all spellings); the value is treated as a delay in
        // milliseconds, falling back to 1000 ms when missing or non-numeric
        const retryAfter = response.headers.get('retry-after');
        const parsedDelay = parseInt(retryAfter ?? '', 10);
        const delay = Number.isNaN(parsedDelay) ? 1000 : parsedDelay;

        // Wait before retrying
        await new Promise((resolve) => setTimeout(resolve, delay));
    }
}

/**
 * Handles GraphQL response errors, extracting GraphQL errors from JSON if available
 * @param response - Fetch Response object
 * @param defaultErrorMessage - Default error message prefix (e.g., "GraphQL query failed")
 * @throws Error with GraphQL errors if available, otherwise with default HTTP error message
 */
async function handleGraphQLError(response: Response, defaultErrorMessage: string): Promise<never> {
    // Try to extract GraphQL errors from JSON response if available
    let errorMessage = `${defaultErrorMessage}: ${response.status} ${response.statusText}`;
    try {
        const errorResult = await response.json();
        if (errorResult.errors) {
            errorMessage = `GraphQL error: ${JSON.stringify(errorResult.errors)}`;
        }
    } catch {
        // Response is not JSON, use the default error message
    }
    throw new Error(errorMessage);
}

/**
 * Authenticates with XRay API and returns the authentication token
 * @param baseUrl - Base URL of XRay API instance
 * @param clientId - Client ID for authentication
 * @param clientSecret - Client secret for authentication
 * @param context - Script context containing environment variables
 * @param throttleCounts - Optional throttle counts tracker to update on rate limit errors
 * @returns Authentication token as string
 */
export async function auth(
    baseUrl: string,
    clientId: string,
    clientSecret: string,
    context: Context<EV>,
    throttleCounts?: ThrottleCounts,
): Promise<string> {
    const response = await fetchWithRetry(
        `${baseUrl}/v1/authenticate`,
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify({
                client_id: clientId,
                client_secret: clientSecret,
            }),
        },
        context,
        throttleCounts,
    );

    if (!response.ok) {
        throw new Error(`Authentication failed: ${response.status} ${response.statusText}`);
    }

    let authKey = await response.text();
    // Strip leading and trailing double quotes if present
    if (authKey.startsWith('"') && authKey.endsWith('"')) {
        authKey = authKey.slice(1, -1);
    }
    return authKey;
}

interface CachedAuthToken {
    token: string;
    expiresAt: number;
}

const TOKEN_VALIDITY_HOURS = 23;
const TOKEN_VALIDITY_MS = TOKEN_VALIDITY_HOURS * 60 * 60 * 1000;

/**
 * Negotiates authentication token with caching support
 * Checks Record Storage for cached token, fetches new one if missing or expired
 * @param storage - Record Storage instance
 * @param authTokenKey - Key to use for storing/retrieving cached token
 * @param baseUrl - Base URL of XRay API instance
 * @param clientId - Client ID for authentication
 * @param clientSecret - Client secret for authentication
 * @param instanceName - Name of the instance (for logging purposes)
 * @param context - Script context containing environment variables
 * @param throttleCounts - Optional throttle counts tracker to update on rate limit errors
 * @returns Authentication token as string
 */
export async function negotiateAuth(
    storage: {
        getValue: <T>(key: string) => Promise<T | undefined>;
        setValue: (key: string, value: unknown) => Promise<void>;
    },
    authTokenKey: string,
    baseUrl: string,
    clientId: string,
    clientSecret: string,
    instanceName: string,
    context: Context<EV>,
    throttleCounts?: ThrottleCounts,
): Promise<string> {
    // Try to get cached token from Record Storage
    const cachedToken = await storage.getValue<CachedAuthToken>(authTokenKey);
    const now = Date.now();

    if (cachedToken && cachedToken.expiresAt > now) {
        // Use cached token if it exists and hasn't expired
        console.log(`Using cached authentication token for ${instanceName}`);
        return cachedToken.token;
    }

    // Fetch new token if cache is missing or expired
    console.log(`Fetching new authentication token for ${instanceName}`);
    const authKey = await auth(baseUrl, clientId, clientSecret, context, throttleCounts);

    // Store new token with expiration timestamp (23 hours from now)
    const expiresAt = now + TOKEN_VALIDITY_MS;
    await storage.setValue(authTokenKey, {
        token: authKey,
        expiresAt,
    });
    console.log(
        `Authentication successful for ${instanceName}, auth key cached until:`,
        new Date(expiresAt).toISOString(),
    );

    return authKey;
}

/**
 * Attachment type from XRay GraphQL schema
 */
interface Attachment {
    id: string;
    filename: string;
    storedInJira: boolean;
    downloadLink: string;
}

/**
 * Step type from XRay GraphQL schema
 */
interface Step {
    id: string;
    attachments: Attachment[];
}

/**
 * Test type from XRay GraphQL schema (partial - only fields we query)
 */
interface Test {
    issueId: string;
    steps: Step[];
}

/**
 * TestResults type from XRay GraphQL schema
 * Return type for getTests GraphQL query
 */
interface TestResults {
    total: number;
    start: number;
    limit: number;
    results: Test[];
    warnings?: string[];
}

// FlattenedAttachment is exported from Utils/Types

/**
 * Gets attachments for test steps using GraphQL query with pagination support
 * @param context - Script context containing environment variables
 * @param sourceAuthToken - Authentication token for source XRay API instance
 * @param offset - Optional offset for pagination (defaults to 0)
 * @param throttleCounts - Optional throttle counts tracker to update on rate limit errors
 * @returns Object containing flattened array of attachments and hasMore flag
 */
export async function getAllAttachmentsForTestSteps(
    context: Context<EV>,
    sourceAuthToken: string,
    offset?: number,
    throttleCounts?: ThrottleCounts,
): Promise<{ attachments: FlattenedAttachment[]; hasMore: boolean }> {
    const baseUrl = context.environment.vars.XRayAPI.BASE_URL;
    const jql = context.environment.vars.JQL;
    const pageSize = context.environment.vars.PAGE_SIZE;
    const startOffset = offset ?? 0;

    // Build GraphQL query using gql-query-builder
    const graphqlQuery = query({
        operation: 'getTests',
        variables: {
            jql: { value: jql, required: true },
            limit: { value: pageSize, required: true, type: 'Int' },
            start: { value: startOffset, required: false, type: 'Int' },
        },
        fields: [
            'total',
            'limit',
            {
                results: [
                    'issueId',
                    {
                        steps: [
                            'id',
                            {
                                attachments: ['id', 'filename', 'storedInJira', 'downloadLink'],
                            },
                        ],
                    },
                ],
            },
        ],
    });

    const response = await fetchWithRetry(
        `${baseUrl}/v2/graphql`,
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                Authorization: `Bearer ${sourceAuthToken}`,
            },
            body: JSON.stringify({
                query: graphqlQuery.query,
                variables: graphqlQuery.variables,
            }),
        },
        context,
        throttleCounts,
    );

    if (!response.ok) {
        await handleGraphQLError(response, 'GraphQL query failed');
    }

    const result = await response.json();

    if (result.errors) {
        throw new Error(`GraphQL errors: ${JSON.stringify(result.errors)}`);
    }

    const testResults: TestResults = result.data.getTests;

    // Flat map attachments with additional metadata
    const flattenedAttachments: FlattenedAttachment[] = [];

    for (const test of testResults.results) {
        // Fetch source issue key from Jira using issueId
        let sourceIssueKey: string;
        try {
            const sourceIssue = await JiraCloudSource.Issue.getIssue({
                issueIdOrKey: test.issueId,
                fields: ['key'],
            });
            sourceIssueKey = sourceIssue.key ?? '';
        } catch (error) {
            console.error(`Failed to fetch source issue ${test.issueId}:`, error);
            sourceIssueKey = '';
        }

        // Fetch target issue ID using source key from target instance
        let targetIssueId: string | null = null;
        if (sourceIssueKey) {
            try {
                const targetIssue = await JiraCloudTarget.Issue.getIssue({
                    issueIdOrKey: sourceIssueKey,
                    fields: ['id'],
                });
                targetIssueId = targetIssue.id ?? null;
            } catch (error) {
                console.error(`Failed to fetch target issue ${sourceIssueKey}:`, error);
                targetIssueId = null;
            }
        }

        // Process each step's attachments
        for (let stepIndex = 0; stepIndex < test.steps.length; stepIndex++) {
            const step = test.steps[stepIndex];
            for (const attachment of step.attachments) {
                flattenedAttachments.push({
                    ...attachment,
                    sourceIssueId: test.issueId,
                    sourceIssueKey,
                    targetIssueId,
                    stepIndex,
                });
            }
        }
    }

    // Determine if there are more tests to page through
    // (pagination counts tests, not attachments, since start/limit apply to getTests)
    const hasMore = startOffset + testResults.results.length < testResults.total;

    return { attachments: flattenedAttachments, hasMore };
}

// FlattenedTestStep is exported from Utils/Types

/**
 * Gets test steps from target instance by issue IDs using GraphQL query
 * @param context - Script context containing environment variables
 * @param targetAuthToken - Authentication token for target XRay API instance
 * @param targetIssueIds - Array of target issue IDs to query
 * @param throttleCounts - Optional throttle counts tracker to update on rate limit errors
 * @returns Flattened array of test steps
 */
export async function getAllTestSteps(
    context: Context<EV>,
    targetAuthToken: string,
    targetIssueIds: string[],
    throttleCounts?: ThrottleCounts,
): Promise<FlattenedTestStep[]> {
    const baseUrl = context.environment.vars.XRayAPI.BASE_URL;

    if (targetIssueIds.length === 0) {
        return [];
    }

    // Build GraphQL query using gql-query-builder
    const graphqlQuery = query({
        operation: 'getTests',
        variables: {
            issueIds: { value: targetIssueIds, required: true, type: '[String]' },
            limit: { value: targetIssueIds.length, required: true, type: 'Int' },
        },
        fields: [
            {
                results: [
                    'issueId',
                    {
                        steps: ['id'],
                    },
                ],
            },
        ],
    });

    const response = await fetchWithRetry(
        `${baseUrl}/v2/graphql`,
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                Authorization: `Bearer ${targetAuthToken}`,
            },
            body: JSON.stringify({
                query: graphqlQuery.query,
                variables: graphqlQuery.variables,
            }),
        },
        context,
        throttleCounts,
    );

    if (!response.ok) {
        await handleGraphQLError(response, 'GraphQL query failed');
    }

    const result = await response.json();

    if (result.errors) {
        throw new Error(`GraphQL errors: ${JSON.stringify(result.errors)}`);
    }

    const testResults: TestResults = result.data.getTests;

    // Flat map steps with metadata
    const flattenedSteps: FlattenedTestStep[] = [];

    for (const test of testResults.results) {
        // Process each step
        for (let stepIndex = 0; stepIndex < test.steps.length; stepIndex++) {
            const step = test.steps[stepIndex];
            flattenedSteps.push({
                stepId: step.id,
                issueId: test.issueId,
                stepIndex,
            });
        }
    }

    return flattenedSteps;
}

/**
 * Update test step result type from XRay GraphQL schema
 */
interface UpdateTestStepResult {
    addedAttachments?: string[];
    removedAttachments?: string[];
    warnings?: string[];
}

/**
 * Updates a test step with attachment using GraphQL mutation
 * @param baseUrl - Base URL of XRay API instance
 * @param targetAuthToken - Authentication token for target XRay API instance
 * @param stepId - ID of the step to update
 * @param attachmentBase64 - Base64 encoded attachment content
 * @param filename - Filename for the attachment
 * @param mimeType - MIME type of the attachment
 * @param context - Script context containing environment variables
 * @param throttleCounts - Optional throttle counts tracker to update on rate limit errors
 * @returns Update test step result
 */
export async function updateTestStep(
    baseUrl: string,
    targetAuthToken: string,
    stepId: string,
    attachmentBase64: string,
    filename: string,
    mimeType: string,
    context: Context<EV>,
    throttleCounts?: ThrottleCounts,
): Promise<UpdateTestStepResult> {
    // Build GraphQL mutation using gql-query-builder
    const graphqlMutation = mutation({
        operation: 'updateTestStep',
        variables: {
            stepId: { value: stepId, required: true, type: 'String' },
            step: {
                value: {
                    attachments: {
                        add: [
                            {
                                filename: filename,
                                data: attachmentBase64,
                                mimeType: mimeType,
                            },
                        ],
                    },
                },
                required: true,
                type: 'UpdateStepInput',
            },
        },
        fields: ['addedAttachments', 'removedAttachments', 'warnings'],
    });

    const response = await fetchWithRetry(
        `${baseUrl}/v2/graphql`,
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                Authorization: `Bearer ${targetAuthToken}`,
            },
            body: JSON.stringify({
                query: graphqlMutation.query,
                variables: graphqlMutation.variables,
            }),
        },
        context,
        throttleCounts,
    );

    if (!response.ok) {
        await handleGraphQLError(response, 'GraphQL mutation failed');
    }

    const result = await response.json();

    if (result.errors) {
        throw new Error(`GraphQL errors: ${JSON.stringify(result.errors)}`);
    }

    return result.data.updateTestStep;
}

/**
 * Attachment input for createTest mutation
 */
export interface CreateTestAttachmentInput {
    filename: string;
    mimeType: string;
    data: string;
}

/**
 * Step input for createTest mutation
 */
export interface CreateTestStepInput {
    action?: string;
    data?: string;
    result?: string;
    attachments?: CreateTestAttachmentInput[];
}

/**
 * Test type input for createTest mutation
 */
export interface CreateTestTypeInput {
    name: string;
}

/**
 * Jira fields input for createTest mutation
 */
export interface CreateTestJiraFieldsInput {
    summary?: string;
    project?: {
        key: string;
    };
}

/**
 * Jira input for createTest mutation
 */
export interface CreateTestJiraInput {
    fields?: CreateTestJiraFieldsInput;
}

/**
 * Create test result type from XRay GraphQL schema
 */
export interface CreateTestResult {
    test?: {
        issueId: string;
    };
    warnings?: string[];
}

/**
 * Creates a new test using GraphQL mutation
 * @param context - Script context containing environment variables
 * @param baseUrl - Base URL of XRay API instance
 * @param authToken - Authentication token for target XRay API instance
 * @param testType - Test type input (e.g., { name: "Generic" })
 * @param jira - Optional Jira fields input (summary, project key, etc.)
 * @param steps - Optional array of test step inputs with attachments
 * @param throttleCounts - Optional throttle counts tracker to update on rate limit errors
 * @returns Create test result with test issueId and warnings
 */
export async function createTest(
    context: Context<EV>,
    baseUrl: string,
    authToken: string,
    testType: CreateTestTypeInput,
    jira?: CreateTestJiraInput,
    steps?: CreateTestStepInput[],
    throttleCounts?: ThrottleCounts,
): Promise<CreateTestResult> {
    // Build the mutation variables object
    const variables: Record<string, { value: unknown; required: boolean; type?: string }> = {
        testType: { value: testType, required: true, type: 'UpdateTestTypeInput' },
    };

    if (jira !== undefined) {
        variables.jira = { value: jira, required: true, type: 'JSON' };
    }

    if (steps !== undefined) {
        variables.steps = { value: steps, required: false, type: '[CreateStepInput]' };
    }

    // Build GraphQL mutation using gql-query-builder
    const graphqlMutation = mutation({
        operation: 'createTest',
        variables,
        fields: [
            {
                test: ['issueId'],
            },
            'warnings',
        ],
    });

    const response = await fetchWithRetry(
        `${baseUrl}/v2/graphql`,
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                Authorization: `Bearer ${authToken}`,
            },
            body: JSON.stringify({
                query: graphqlMutation.query,
                variables: graphqlMutation.variables,
            }),
        },
        context,
        throttleCounts,
    );

    if (!response.ok) {
        await handleGraphQLError(response, 'GraphQL mutation failed');
    }

    const result = await response.json();

    if (result.errors) {
        throw new Error(`GraphQL errors: ${JSON.stringify(result.errors)}`);
    }

    return result.data.createTest;
}
TypeScriptWipeAuth

import { RecordStorage } from '@sr-connect/record-storage';

const AUTH_TOKEN_KEY = 'xray-auth-token';

// eslint-disable-next-line @typescript-eslint/no-explicit-any
export default async function (event: any, context: Context<EV>): Promise<void> {
    console.log('Wiping auth data from Record Storage...');

    const storage = new RecordStorage();

    // Check if the auth token exists before attempting to delete
    const tokenExists = await storage.valueExists(AUTH_TOKEN_KEY);

    if (tokenExists) {
        await storage.deleteValue(AUTH_TOKEN_KEY);
        console.log(`Successfully deleted auth token with key: ${AUTH_TOKEN_KEY}`);
    } else {
        console.log(`No auth token found with key: ${AUTH_TOKEN_KEY}`);
    }

    console.log('Auth data wipe completed.');
}
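WipeAuth deletes the token that negotiateAuth caches under the same Record Storage key. The expiry decision that cache relies on (23-hour validity, checked against the stored timestamp) can be sketched in isolation; the helper names here are illustrative, not part of the template:

```typescript
// Shape of the cached token entry stored in Record Storage
interface CachedAuthToken { token: string; expiresAt: number; }

// 23-hour validity window, matching TOKEN_VALIDITY_MS in Utils/XRay
const TOKEN_VALIDITY_MS = 23 * 60 * 60 * 1000;

// A cached token is usable only if it exists and has not yet expired
function isTokenUsable(cached: CachedAuthToken | undefined, now: number): boolean {
    return cached !== undefined && cached.expiresAt > now;
}

// Build the entry that negotiateAuth would store after a fresh authentication
function cacheEntry(token: string, now: number): CachedAuthToken {
    return { token, expiresAt: now + TOKEN_VALIDITY_MS };
}
```

Running WipeAuth simply removes the stored entry, so the next negotiateAuth call sees no cached token and authenticates afresh.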
