Best practices


Although the Skedulo data loader is quite flexible at interpreting your import data, there are certain formatting details you need to be careful with when preparing your import files. A seemingly insignificant hiccup can make the entire file unreadable to the importer. Following these simple best practices helps you avoid that situation:

  • Use a spreadsheet program, such as Microsoft Excel or Google Sheets, to create your CSV file.

  • Verify your CSV file in a plain-text editor, and use a CSV validation tool to check its structure.

  • Save the CSV file in UTF-8 format.

  • Make sure you don’t have any duplicated or empty headers.

  • Make sure there are no empty rows anywhere in the file, including at the bottom. (Empty rows at the bottom of the file are not visible in Excel, so use a text editor to check for them.)

  • The first row of your file should contain the column headers, such as First Name, Last Name, or Job Description.

  • The import process is easier when the file structure resembles the object structure in Skedulo: the column headers are then mapped to the corresponding fields automatically, and you do not need to map them manually.

  • Make sure all your date fields are formatted consistently, in a format that the data loader accepts.

  • Make sure your files do not exceed 10 MB.

  • For performance reasons, we recommend that you do not upload more than 500 records at a time.

  • When testing bulk imports in UAT environments, ensure that you do not use real email addresses. Email addresses used for data loading testing cannot be used again in Production environments.

  • If any errors occur, check the results files generated when tasks run; they help you understand what went wrong. You can view and download these files on the tasks list page, next to each task.
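Several of the checks above (UTF-8 encoding, duplicate or empty headers, empty rows) can be automated before uploading. The sketch below is illustrative only, not part of the Skedulo tooling; `check_csv` is a hypothetical helper name.

```python
import csv

def check_csv(path):
    """Sanity-check an import file: UTF-8 encoding, no duplicate
    or empty headers, no empty rows (including trailing ones)."""
    problems = []
    # Opening with encoding="utf-8" raises UnicodeDecodeError if the
    # file was saved in another encoding.
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        headers = next(reader, [])
        if "" in headers:
            problems.append("empty header")
        if len(set(headers)) != len(headers):
            problems.append("duplicate header")
        # Data rows start on line 2 of the file.
        for lineno, row in enumerate(reader, start=2):
            if not any(cell.strip() for cell in row):
                problems.append(f"empty row at line {lineno}")
    return problems
```

An empty result list means the file passed these basic checks; it does not guarantee the import will succeed.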
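For the date-formatting advice above, it often helps to normalize every date value to one format before importing. This sketch assumes ISO 8601 (YYYY-MM-DD) as the target; the `KNOWN_FORMATS` list is a hypothetical set of source formats, so adjust both to match your data and the format your instance expects.

```python
from datetime import datetime

# Hypothetical source formats; replace with those found in your data.
KNOWN_FORMATS = ["%d/%m/%Y", "%m-%d-%Y", "%Y-%m-%d"]

def to_iso_date(value):
    """Normalize a date string to ISO 8601 (YYYY-MM-DD)."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(value.strip(), fmt).date().isoformat()
        except ValueError:
            continue  # try the next known format
    raise ValueError(f"Unrecognized date: {value!r}")
```

Raising on unrecognized values is deliberate: it is better to fail while preparing the file than to discover malformed dates in the import results.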
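To stay within the 10 MB file size and 500-record guidance above, a large export can be split into smaller files before uploading, repeating the header row in each part. A minimal sketch (the `split_csv` helper and its part-naming scheme are illustrative):

```python
import csv

def split_csv(path, batch_size=500):
    """Split a CSV into files of at most batch_size data rows each,
    copying the header row into every part. Returns the part paths."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)
    parts = []
    for n, start in enumerate(range(0, len(rows), batch_size), start=1):
        part = f"{path}.part{n}.csv"
        with open(part, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(rows[start:start + batch_size])
        parts.append(part)
    return parts
```

Each resulting part can then be uploaded as a separate task, keeping every batch within the recommended limits.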