Unlike system_imports.py and system_exports.py, which
deal with whole tables via COPY, this allows us to
export or import *either* whole tables *or* only
specific columns.
The data will be exported to text files containing
the selected columns + the primary key columns for
the table.
When importing, those tables for which a selection
of columns was exported must already be populated.
The import process will overwrite the data of the
non-primary-key columns it knows about. If whole
tables are exported, on the other hand, rows will
be appended rather than updated on re-import. It is
the user's responsibility to make sure that this
does not cause any conflicts.
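As an illustration, the column-selective import boils
down to something like the sketch below. It assumes
psycopg2, a table final_lines keyed on a sequence
column, and an exported file holding (sequence,
remarks); all of these names are hypothetical.

    import psycopg2

    conn = psycopg2.connect("dbname=dougal")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        # Stage the exported columns in a temporary table...
        cur.execute("CREATE TEMP TABLE staged (sequence int, remarks text)")
        with open("final_lines.remarks.txt") as f:  # hypothetical file
            cur.copy_from(f, "staged")
        # ...then overwrite only the non-primary-key columns of rows
        # whose primary key already exists in the target table.
        cur.execute("""
            UPDATE final_lines AS t
               SET remarks = s.remarks
              FROM staged AS s
             WHERE t.sequence = s.sequence
        """)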
Script meant to be run by runner.sh.
It will not overwrite existing files. If a
sequence is modified after the first export,
the resulting file needs to be removed by the
user before a re-export will occur.
The idea is to eventually export on demand
when a new row is added to final_lines.
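A minimal sketch of the no-overwrite rule, with
hypothetical paths and export logic:

    from pathlib import Path

    def export_sequence(seq, data, outdir):
        path = Path(outdir) / f"{seq}.txt"  # hypothetical naming
        if path.exists():
            # A stale file must be removed by the user before a
            # re-export will occur.
            return False
        path.write_text(data)
        return True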
When the database is recreated, the sequences
used in the events_timed and events_seq tables
will be at their initial values, which will
almost certainly conflict with existing data
when it is imported via COPY.
With this commit, we set the current value for
those sequences to something usable.
Fixes #33.
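Conceptually, the fix amounts to advancing each
sequence past the highest imported id, roughly as
sketched below; the sequence names and the id column
are assumptions.

    import psycopg2

    conn = psycopg2.connect("dbname=dougal")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        for table, seq in [("events_timed", "events_timed_id_seq"),
                           ("events_seq", "events_seq_id_seq")]:
            # COALESCE guards against empty tables, where MAX(id)
            # would be NULL and setval() would fail.
            cur.execute(
                f"SELECT setval('{seq}', "
                f"COALESCE((SELECT MAX(id) FROM {table}), 1))"
            )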
Provided that the SmartSource headers are being
saved to file, and that the path to those files
is present in the survey configuration, we now
import SmartSource information as metadata in
raw_shots.meta->'smsrc'.
Closes #19.
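A rough sketch of the metadata update, assuming
psycopg2 and a shot identifier column; the header
parsing itself is omitted, and all names other than
raw_shots.meta->'smsrc' are hypothetical.

    import json
    import psycopg2

    def store_smsrc(conn, shot, headers):
        # Merge the parsed SmartSource headers into the JSONB
        # metadata under the 'smsrc' key.
        with conn.cursor() as cur:
            cur.execute(
                """
                UPDATE raw_shots
                   SET meta = jsonb_set(COALESCE(meta, '{}'::jsonb),
                                        '{smsrc}', %s::jsonb)
                 WHERE shot = %s
                """,
                (json.dumps(headers), shot),
            )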
It exports the data that is entered directly into
Dougal as opposed to being read from an external
source.
As of this commit, not all direct data is exported.
Specifically, sequence comments (raw and final),
sequence and shot NTBA statuses, and shot NTBP
statuses are not exported.
If files match the capture globs but do not match
the file name regexp patterns, these routines will
emit a message to stderr and skip the non-matching
files.
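The behaviour is roughly the following (the helper
name is hypothetical):

    import re
    import sys
    from glob import glob

    def capture_files(pattern, name_re):
        # Yield files matching both the capture glob and the file
        # name regexp; warn on stderr and skip the rest.
        rx = re.compile(name_re)
        for path in glob(pattern):
            if rx.search(path):
                yield path
            else:
                print(f"skipping {path}: no match for {name_re}",
                      file=sys.stderr)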
It will not delete any labels that have been removed
from the configuration, as those may be in use, but
it will add new labels and modify existing ones if
they have changed.
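In PostgreSQL terms this is an upsert without a
delete, sketched below with hypothetical table and
column names:

    import psycopg2

    def sync_labels(conn, labels):
        # labels: {name: description}. Insert new labels and update
        # changed ones, but never delete: labels removed from the
        # configuration may still be in use.
        with conn.cursor() as cur:
            for name, descr in labels.items():
                cur.execute(
                    """
                    INSERT INTO labels (name, description)
                    VALUES (%s, %s)
                    ON CONFLICT (name)
                    DO UPDATE SET description = EXCLUDED.description
                    """,
                    (name, descr),
                )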
This script runs the deferred imports. It is meant to
be called from a cronjob at regular intervals – every
one or two minutes is probably a good setting.
It checks if another instance is already running before
doing its thing.
If anything goes wrong (any of the called processes
exits with a non-zero status) it will send an alert
to GitLab, provided that the authorisation key is
known.
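The single-instance guard can be as simple as the
sketch below; the lock path and the commands being
run are assumptions.

    import fcntl
    import subprocess
    import sys

    lock = open("/var/lock/deferred_imports.lock", "w")
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)  # another instance is already running

    result = subprocess.run(["./deferred_imports.py"])
    if result.returncode != 0:
        # Alerting requires the GitLab authorisation key to be known.
        subprocess.run(["./send_alert.py", "deferred imports failed"])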
In case of errors (or anything else of note), send_alert.py
can be used to push information to a GitLab alerts endpoint.
It is generic enough that it can be used with anything else, though.
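At its core this is a single authenticated POST,
along the lines of the sketch below; the exact
endpoint URL and payload fields depend on the GitLab
HTTP integration and are assumptions here.

    import requests

    def send_alert(url, key, title, description=""):
        # e.g. url = "https://gitlab.example.com/.../alerts/notify.json"
        r = requests.post(
            url,
            headers={"Authorization": f"Bearer {key}"},
            json={"title": title, "description": description},
            timeout=10,
        )
        r.raise_for_status()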
If their respective configuration keys are not
defined in a survey configuration, the import
routines will print an informational message
and exit successfully.
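The pattern, with a hypothetical key name, is simply:

    import sys

    def main(config):
        # A missing configuration key is not an error: there is
        # simply nothing to import.
        if "smsrc_path" not in config:
            print("smsrc_path not configured; nothing to import")
            sys.exit(0)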
This is a super-simple library that does the minimum
required to get things going for the specific
operations where this code is foreseen to be used in
the immediate future. It is not, and does not aim to
be, a complete, generic or universal P1/11 parsing
solution.
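In spirit, the library does little more than the
sketch below; the record-type test is an assumption
for illustration, not a faithful rendering of the
P1/11 specification.

    def records(path):
        # Split header records from data records and yield the raw
        # lines; anything more is left to the caller.
        with open(path) as f:
            for line in f:
                line = line.rstrip("\n")
                if not line:
                    continue
                kind = "header" if line.startswith("H") else "data"
                yield kind, line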