For example, I need to load millions/billions of rows from Oracle/SQL Server database tables into Redshift through S3.
Suppose I have loaded all the data and the COPY command fails in the Redshift Bulk Load Snap. In that case I need to read those millions/billions of rows from the Oracle/SQL Server tables all over again.
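For context, the Snap stages the data to S3 and then issues a COPY against it, roughly like this minimal sketch (the table, bucket, prefix, and IAM role ARN are placeholders of mine, not what the Snap actually generates):

```sql
-- Rough sketch of the kind of COPY the Bulk Load Snap runs behind the scenes.
-- 'my-staging-bucket', the 'stage/orders_' prefix, and the IAM role ARN are hypothetical.
COPY public.orders
FROM 's3://my-staging-bucket/stage/orders_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV
GZIP;
```

If that COPY fails, the staged files are already gone, so the whole extract from the source database has to be repeated.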
By default the Redshift Bulk Load Snap has no option to preserve the staged files under a proper customized name; the files are not available in the S3 bucket afterwards.
If the files stayed in S3, I could simply make any fixes in S3 itself instead of hitting the database again and again. Also, if we preserve the files with proper names, we can reuse them later if there is any data loss on the Redshift end.
Hope this gives you an idea of the scenarios I am talking about. This file-preserving behavior already exists in the Google BigQuery Bulk Load (Cloud Storage) Snap.
I faced this very issue again today.
It is a COPY command error. The solution is to change the column's datatype on the Redshift end, which is easy enough, but then I have to read all the data from the source once again.
If I had the file in S3, I could simply change the datatype on the Redshift end and rerun the COPY command with the corresponding S3 file path. But the file is not available in S3, and that is the problem.
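With the file preserved, recovery would be as simple as this sketch (table, column, and S3 path are hypothetical; note that Redshift only allows this in-place ALTER for widening VARCHAR columns):

```sql
-- Hypothetical recovery if the staged file were still in S3:
-- widen the offending column, then rerun COPY against the existing file.
ALTER TABLE public.orders ALTER COLUMN order_note TYPE VARCHAR(1024);

COPY public.orders
FROM 's3://my-staging-bucket/stage/orders_0001.gz'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV
GZIP;
```

No extra pull from Oracle/SQL Server would be needed, because the data is already sitting in S3.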
Hope that makes the idea clearer!