I use a tab-delimited text format to store data for an application. This format is useful since the data can also be opened easily in Excel.
My datasets have around 5000 lines of data but a varying number of columns, up to 1000. Consequently, the string variables I need to use are very large. However, when I write the data back to file, I use TRIM(STRING) in the write statement, to stop the file size being dictated by the maximum possible number of columns.
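To illustrate, a minimal sketch of the write pattern I'm describing (the buffer size, file name, and record contents are placeholders, not my actual code):

```fortran
program write_rows
  implicit none
  ! Worst-case record length, sized for the maximum 1000 columns (assumed width)
  integer, parameter :: maxlen = 1000 * 16
  character(len=maxlen) :: line
  integer :: unit, i

  open(newunit=unit, file='data.txt', status='replace')
  do i = 1, 5000
     ! Build one tab-delimited record into the fixed-length buffer;
     ! everything past the data is left as trailing blanks
     line = 'a' // char(9) // 'b' // char(9) // 'c'
     ! TRIM drops the trailing blanks so the file stays compact,
     ! but it has to scan back through the (mostly blank) buffer each call
     write(unit, '(a)') trim(line)
  end do
  close(unit)
end program write_rows
```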
What I find is that the TRIM call is consuming 15% of the total CPU time for my application (admittedly, this includes rewriting the 5000 lines for each of the 25 cases in the file, so around 125,000 writes).
Is there a faster way of working with this data?