If you can get the list of tables into documents (one document per table), it should be pretty simple to build a (parametrized) pipeline for this and call it once per document (i.e. once per table).
You can also parallelize that with settings on the execute pipeline snap, assuming both source and destination support this.
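Purely as an illustration of that fan-out pattern (this is Python, not SnapLogic configuration; the table names and worker count are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def run_child(table_name: str) -> None:
    # Stand-in for the child pipeline described below: select from the source
    # table, pull its schema, and bulk-load it into the destination.
    print(f"replicating {table_name}")

# Hypothetical table list; in the pipeline this would be one document per table.
tables = ["CUSTOMERS", "ORDERS", "ORDER_ITEMS"]

# Parallel fan-out, roughly what the parallelism settings on the
# execute pipeline snap give you.
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(run_child, tables))
```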
Your child pipeline:
Start with a SQL Select snap, which has a second (optional) output view that returns the table schema.
In our case we tweak the schema a little (uppercase column names, remove indexes) before feeding it into a load snap (Snowflake Bulk Load for us), which again has two input views, the second one being for the schema, and we tick the "create table" checkbox on the load snap. A rough sketch of the schema tweak follows below.
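This is not the actual schema document format coming out of the SQL Select snap's second output view (the field names here are hypothetical), but the adjustment amounts to something like:

```python
def adjust_schema(schema: dict) -> dict:
    """Uppercase column names and drop index metadata before the load snap."""
    out = dict(schema)
    out["columns"] = [
        {**col, "name": col["name"].upper()} for col in schema.get("columns", [])
    ]
    # The create-table step on the load side does not need source index definitions.
    out.pop("indexes", None)
    return out

# Hypothetical example of an incoming schema document.
example = {
    "table": "orders",
    "columns": [
        {"name": "order_id", "type": "NUMBER"},
        {"name": "created_at", "type": "TIMESTAMP"},
    ],
    "indexes": [{"name": "orders_pk", "columns": ["order_id"]}],
}
print(adjust_schema(example))
```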