Compare commits

...

1058 Commits

Author SHA1 Message Date
D. Berge
a8ff7f3b52 Fix indentation 2025-08-22 02:13:30 +02:00
D. Berge
15b62ff581 Fix typos 2025-08-22 02:13:15 +02:00
D. Berge
ade86be556 Replace tilt icons 2025-08-22 02:12:45 +02:00
D. Berge
53594416a7 Remove dead code 2025-08-22 02:04:59 +02:00
D. Berge
ff4b4a9c90 Add more view controls to group map 2025-08-22 02:04:42 +02:00
D. Berge
5842940d3b Add more view controls to map 2025-08-22 02:03:29 +02:00
D. Berge
df6f1b2d32 Add script to update comparison groups.
This should be run at regular intervals (via cron or so) to keep
the comparisons up to date.

It is not necessarily a good idea to run this as part of the
runner.sh script as it will delay other tasks trying to
update the active project every time.

Probably OK to put it on a cronjob every 2‒24 hours. If two
copies are running concurrently that should not break anything
but it will increase the server load.
2025-08-22 00:04:46 +02:00
D. Berge
c39afc1f3c Return project timestamps 2025-08-22 00:04:21 +02:00
D. Berge
a68000eac6 Add option to return project timestamp 2025-08-22 00:02:05 +02:00
D. Berge
87aa78af00 Updated wanted db schema 2025-08-22 00:01:40 +02:00
D. Berge
3b9061aeae Add database upgrade file 44 2025-08-22 00:01:02 +02:00
D. Berge
57dae4c755 Clean up dead code 2025-08-22 00:00:01 +02:00
D. Berge
b1344bebd8 Update the required schema version.
This is necessary for the comparisons code to work.
2025-08-21 17:08:23 +02:00
D. Berge
3e91ccba8d Don't show monitor lines by default 2025-08-21 15:21:01 +02:00
D. Berge
fa0be9c0b7 Make loading indicator spin when 0% 2025-08-21 15:20:31 +02:00
D. Berge
dcbf5496f6 Remove unneeded dependency 2025-08-21 15:10:45 +02:00
D. Berge
8007f46e37 Fix typo 2025-08-21 15:04:48 +02:00
D. Berge
4a7683cfd0 Add group map view 2025-08-21 14:58:53 +02:00
D. Berge
565a9d7e01 Add support for type 4 decoding 2025-08-21 14:58:53 +02:00
D. Berge
b07244c823 Fix component paths 2025-08-21 14:58:53 +02:00
D. Berge
c909edc41f Move components to subdirectory 2025-08-21 14:55:27 +02:00
D. Berge
41ef511123 Return type 4 sequence data 2025-08-21 14:52:50 +02:00
D. Berge
4196e9760b Add encoding type 4 to bundle 2025-08-21 14:51:49 +02:00
D. Berge
6b6f5ab511 Link from group summary to individual projects 2025-08-20 12:06:20 +02:00
D. Berge
7d8c78648d Don't request summaries in ProjectList.
Those will be populated directly by Vuex.
2025-08-20 12:05:44 +02:00
D. Berge
faf7e9c98f Try to improve responsiveness when refreshing project list 2025-08-20 12:05:05 +02:00
D. Berge
abf2709705 Expand groups router definition 2025-08-20 12:04:26 +02:00
D. Berge
f5dfafd85a Make event handler more specific 2025-08-20 12:03:53 +02:00
D. Berge
cf8b0937d9 Rework comparison components.
More focused on error ellipses.
2025-08-19 19:28:19 +02:00
D. Berge
d737f5d676 Refresh comparisons when notified of changes 2025-08-19 19:27:38 +02:00
D. Berge
5fe19da586 Add control to reset comparisons view 2025-08-19 19:27:03 +02:00
D. Berge
0af0cf4b42 Add overlays when loading / data error 2025-08-19 18:58:04 +02:00
D. Berge
ccb8205d26 Don't cache comparisons in the API 2025-08-19 18:55:31 +02:00
D. Berge
9b3fffdcfc Don't save comparison samples 2025-08-19 18:54:28 +02:00
D. Berge
dea1e9ee0d Add comparisons channel to notifications 2025-08-19 18:53:40 +02:00
D. Berge
d45ec767ec Add database upgrade file 43 2025-08-19 17:56:30 +02:00
D. Berge
67520ffc48 Add database upgrade file 42 2025-08-19 17:56:14 +02:00
D. Berge
22a296ba26 Add database upgrade file 41 2025-08-19 17:55:58 +02:00
D. Berge
f89435d80f Don't overwrite existing comparisons unless forced.
opts.overwrite = true will cause existing comparisons to be
recomputed.
2025-08-19 17:20:57 +02:00
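
A minimal sketch of the guard described in the message above, assuming hypothetical `db.comparisons` accessors and a `computeComparison()` helper (none of these names are from the repo):

```js
// Hypothetical sketch: keep stored comparisons unless opts.overwrite is set.
async function updateComparisons (projectId, opts = {}) {
  const existing = await db.comparisons.get(projectId); // assumed accessor
  if (existing && !opts.overwrite) {
    return existing;                     // leave the stored comparison alone
  }
  const fresh = await computeComparison(projectId); // assumed recompute helper
  await db.comparisons.save(projectId, fresh);      // assumed accessor
  return fresh;
}
```
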
D. Berge
a3f1dd490c Fix non-existent method 2025-08-19 17:20:03 +02:00
D. Berge
2fcfcb4f84 Add link to group comparison from project list 2025-08-18 16:39:20 +02:00
D. Berge
b60db7e7ef Add frontend route for 4D comparisons 2025-08-18 14:17:17 +02:00
D. Berge
4bb087fff7 Add 4D comparisons list Vue component 2025-08-18 14:16:23 +02:00
D. Berge
15af5effc3 Add 4D comparisons Vue component 2025-08-18 14:15:52 +02:00
D. Berge
b5c6d04e62 Add utilities for transforming duration objects 2025-08-18 14:15:14 +02:00
D. Berge
571c5a8bca Add Vue components for 4D comparisons 2025-08-18 14:14:34 +02:00
D. Berge
c45982829c Add set operations utilities 2025-08-18 14:11:56 +02:00
D. Berge
f3958b37b7 Add comparison API endpoints 2025-08-18 14:11:20 +02:00
D. Berge
58374adc68 Add two new bundle types.
Of which 0xa is not actually used and 0xc is used for geometric
comparison data ([ line, point, δi, δj ]).
2025-08-18 14:05:26 +02:00
D. Berge
32aea8a5ed Add comparison functions to server/lib 2025-08-18 13:53:43 +02:00
D. Berge
023b65285f Fix bug trying to get project info for undefined 2025-08-18 13:51:37 +02:00
D. Berge
a320962669 Add project group info to Vuex 2025-08-18 13:50:49 +02:00
D. Berge
0c0067b8d9 Add iterators 2025-08-18 13:48:49 +02:00
D. Berge
ef8466992c Add automatic event icon to log.
So that the user can see at a glance which events were created by
Dougal (not including QC events).
2025-08-18 11:22:58 +02:00
D. Berge
8e4e70cbdc Add server status info to help dialogue 2025-08-17 13:19:51 +02:00
D. Berge
4dadffbbe7 Refactor Selenium to make it more robust.
It should stop runaway Firefox processes.
2025-08-17 13:18:04 +02:00
D. Berge
24dcebd0d9 Remove logging statements 2025-08-17 13:17:22 +02:00
D. Berge
12a762f44f Fix typo in @dougal/binary 2025-08-16 14:55:53 +02:00
D. Berge
ebf13abc28 Merge branch '337-fix-event-queue' into 'devel'
Resolve "Automatic event detection fault: soft start on every shot during line"

Closes #337

See merge request wgp/dougal/software!61
2025-08-16 12:55:15 +00:00
D. Berge
b3552db02f Add error checking to ETag logic 2025-08-16 11:36:43 +02:00
D. Berge
cd882c0611 Add debug info to soft start detection 2025-08-16 11:36:43 +02:00
D. Berge
6fc9c020a4 Fix off-by-one error in LGSP detection 2025-08-16 11:36:43 +02:00
D. Berge
75284322f1 Modify full volume detection on Smartsource.
The Smartsource firmware seems to have changed, rendering the old
test invalid.
2025-08-16 11:36:43 +02:00
D. Berge
e849c47f01 Remove old queue implementation 2025-08-16 11:36:43 +02:00
D. Berge
387d20a4f0 Rewrite automatic event handling system 2025-08-16 11:36:43 +02:00
D. Berge
2fab06d340 Don't send timestamp when patching seq+point events.
Closes #339.
2025-08-16 11:35:35 +02:00
D. Berge
7d2fb5558a Hide switches to enable additional graphs.
All violin plots as well as position scatter plots and histograms
are shown by default. This is due to #338.

For some reason, having them enabled from the get-go does not
cause any problems.
2025-08-15 18:09:51 +02:00
D. Berge
764e2cfb23 Rename endpoint 2025-08-14 13:34:36 +02:00
D. Berge
bf1af1f76c Make it explicit that :id is numeric 2025-08-14 13:34:27 +02:00
D. Berge
09e4cd2467 Add CSV event import.
Closes #336
2025-08-14 13:33:30 +02:00
D. Berge
2009d73a2b Fix action registration and unregistration 2025-08-13 17:03:00 +02:00
D. Berge
083ee812de Use cookies for authentication as a last resort.
Fixes #335
2025-08-13 16:54:38 +02:00
D. Berge
84510e8dc9 Add proper logging 2025-08-13 15:42:49 +02:00
D. Berge
7205ec42a8 Fix handler registration.
The way it was being done meant that unregisterHandlers would not
have worked.
2025-08-13 15:42:49 +02:00
D. Berge
73d85ef81f Fix scheduling of token refresh via websocket 2025-08-13 12:58:36 +02:00
D. Berge
6c4dc35461 Fix bad status on preplot lines tab
If there were no raw / final sequences on a line, planned sequences
would not show either.
2025-08-13 12:45:50 +02:00
D. Berge
a5ebff077d Fix authentication middleware erroring on IPv6 2025-08-13 11:50:20 +02:00
D. Berge
2a894692ce Throttle snack notifications 2025-08-12 00:22:09 +02:00
D. Berge
25690eeb52 Fix showSnack in main.js 2025-08-11 23:48:08 +02:00
D. Berge
3f9776b61d Let the user know when we're getting gateway errors 2025-08-11 23:47:25 +02:00
D. Berge
8c81daefc0 Move the two /configuration endpoints next to each other 2025-08-11 22:20:46 +02:00
D. Berge
c173610e87 Simplify middleware 2025-08-11 22:19:51 +02:00
D. Berge
301e5c0731 Set headers only on 304 2025-08-11 22:06:51 +02:00
D. Berge
48d9f45fe0 Clean up debug messages 2025-08-11 22:06:20 +02:00
D. Berge
cd23a78592 Merge branch '190-refactor-map' into 'devel'
Resolve "Refactor map"

Closes #190, #322, #323, #324, #325, #326, and #321

See merge request wgp/dougal/software!25
2025-08-11 13:01:00 +00:00
D. Berge
e368183bf0 Show release notes for previous versions too 2025-08-11 14:59:22 +02:00
D. Berge
02477b071b Compress across the board.
It's still subject to the compression module's filters, but now
we try to compress every response in principle.
2025-08-11 13:57:11 +02:00
D. Berge
6651868ea7 Enable compression for vessel track responses 2025-08-11 13:40:53 +02:00
D. Berge
c0b52a8245 Be more aggressive about what gets compressed 2025-08-11 12:42:48 +02:00
D. Berge
90ce6f063e Remove dead code 2025-08-11 02:31:43 +02:00
D. Berge
b2fa0c3d40 Flatten vesselTrackConfig for better reactivity 2025-08-11 02:31:12 +02:00
D. Berge
83ecaad4fa Change vessel colour 2025-08-11 01:57:40 +02:00
D. Berge
1c5fd2e34d Calculate properly first / last timestamps of vessel tracks 2025-08-11 01:56:46 +02:00
D. Berge
aabcc74891 Add compression to some endpoints.
Consideration will be given to adding (conditional) compression
to all endpoints.
2025-08-11 01:53:50 +02:00
D. Berge
2a7b51b995 Squash another cookie 2025-08-11 01:52:04 +02:00
D. Berge
5d19ca7ca7 Add authentication to vessel track request 2025-08-10 22:03:25 +02:00
D. Berge
910195fc0f Comment out "Map settings" control on map.
Not sure it will actually be used, after all.
2025-08-10 21:53:55 +02:00
D. Berge
6e5570aa7c Add missing require 2025-08-10 21:53:04 +02:00
D. Berge
595c20f504 Add vessel position to map.
Updates via websocket using the `realtime` channel notification
message.
2025-08-10 21:52:02 +02:00
D. Berge
40d0038d80 Add vessel track layer to map.
Track length may be changed by clicking on the appropriate icon.
2025-08-10 21:47:43 +02:00
D. Berge
acdf118a67 Add new /vessel/track endpoints.
This is a variation on /navdata but returns data more suitable
for plotting vessel tracks on the map.
2025-08-10 21:39:35 +02:00
D. Berge
b9e0975d3d Add clone routine to project DB lib (WIP).
This relates to #333.
2025-08-10 21:37:12 +02:00
D. Berge
39d9c9d748 Fix GeoJSON returned by /navdata endpoint 2025-08-10 21:36:37 +02:00
D. Berge
b8b25dcd62 Update IP getter script to return LAN address.
get-ip.sh internet: returns the first IP address found that has
internet access.

get-ip.sh local (or no argument): returns the list of non-loopback
IPs minus the one that has internet access.

This means that update-dns.sh now sends the first IP address that
does *not* have internet access.
2025-08-09 22:27:23 +02:00
D. Berge
db97382758 Add scripts to automatically update the LAN DNS records.
./sbin/update-dns.sh may be called at regular intervals (one hour
or so) via crontab.

It will automatically detect:
- its local host name (*.lan.dougal.aaltronav.eu); and
- which IP has internet access, if any.

Armed with that information and with the dynamic DNS API password
stored in DYNDNS_PASSWD in ~/.dougalrc, it will update the relevant
DNS record.

For this to work, the first `lan.dougal` hostname in the Nginx
configuration must be the one that is set up for dynamic update.
Other `lan.dougal` hostnames should be CNAME records pointing to
the first one.
2025-08-09 18:37:15 +02:00
D. Berge
ae8e5d4ef6 Do not use cookies for backend authentication 2025-08-09 12:43:17 +02:00
D. Berge
2c1a24e4a5 Do not store JWT in document.cookie 2025-08-09 12:14:17 +02:00
D. Berge
0b83187372 Provide authorisation details to Deck.gl layers.
Those layers that call API endpoints directly no longer need to
rely on cookies as they use the JWT token directly via the
`Authorization` header.
2025-08-09 12:12:24 +02:00
D. Berge
3dd51c82ea Adapt map links to new format 2025-08-08 18:54:25 +02:00
D. Berge
17e6564e70 Implement map crosshairs.
These are coordinates that are supplied in the fragment part of the
URL. When available, a marker is shown at each given position.
Labels may also be supplied, in which case they are shown as well.
2025-08-08 18:51:54 +02:00
D. Berge
3a769e7fd0 Adapt to new map implementation.
Note: if we implement a fallback to the old Leaflet code, the new
hash format will need to be accepted in Leaflet too.
2025-08-08 16:10:17 +02:00
D. Berge
7dde0a15c6 Fix handling of view state and layers in URL hash 2025-08-08 16:09:32 +02:00
D. Berge
2872af8d60 Refresh sequence line data on every render 2025-08-08 13:48:49 +02:00
D. Berge
4e581d5664 Add final-raw heatmap 2025-08-08 13:47:30 +02:00
D. Berge
a188e9a099 Tweak colour scales 2025-08-08 13:45:54 +02:00
D. Berge
cd6ad92d5c Use the same names in the user interface as in the code 2025-08-08 13:44:42 +02:00
D. Berge
08dfe7ef0a Add notification handlers to Map.
They reload any sequence data on notification of changes.
2025-08-08 12:45:15 +02:00
D. Berge
6a5238496e Add possibility to refresh points map while loading binary data 2025-08-08 12:44:21 +02:00
D. Berge
bc237cb685 Add final data points layer to map 2025-08-08 12:43:27 +02:00
D. Berge
4957142fb1 Refactor sequenceBinaryData.
It is no longer a computed property but actual data. It gets
recalculated on demand via getSequenceBinaryData().
2025-08-08 12:42:38 +02:00
D. Berge
5a19c81ed1 Unregister notification handlers.
When leaving the Project component, all its notification handlers
will be unregistered, otherwise we end up with a memory leak.
2025-08-08 12:22:56 +02:00
D. Berge
b583dc6c02 Support unregistering notification handlers 2025-08-08 12:20:58 +02:00
D. Berge
134e3bce4e Add client-side support for type 3 bundles (final data) 2025-08-08 12:20:04 +02:00
D. Berge
f5ad9d7182 Use sequenceBinaryData for raw data points layer.
Saves us from ending up with an extra copy of the data.
2025-08-08 12:18:07 +02:00
D. Berge
07874ffe0b Filter missing values out of JSON data for binary packing 2025-08-08 12:15:39 +02:00
D. Berge
695add5da6 Increase the resolution of position errors in bundle.
Note: this does not actually matter as of this commit as we are
storing those values as Float32 but it will become relevant when
we start packing them as Int16.
2025-08-08 12:15:05 +02:00
D. Berge
6a94287cba Add type 3 binary bundle.
Consisting of final positions + errors.
2025-08-08 11:24:16 +02:00
D. Berge
c2ec2970f0 Remove dead code 2025-08-08 11:20:03 +02:00
D. Berge
95d6d0054b Adapt GIS endpoint to new preplots tables structure 2025-08-07 22:02:04 +02:00
D. Berge
5070be5ff3 Handle event changes 2025-08-07 20:18:18 +02:00
D. Berge
d5e77bc946 Move API action option to the correct argument 2025-08-07 19:20:27 +02:00
D. Berge
f6faad17db Fix Python's idiotic syntax 2025-08-07 17:17:43 +02:00
D. Berge
94cdf83b13 Change access permissions to files endpoints 2025-08-07 16:23:55 +02:00
D. Berge
6a788ae28b Add logging statements 2025-08-07 16:23:14 +02:00
D. Berge
544117eec3 Fix retrieval of preplot previews 2025-08-07 16:20:00 +02:00
D. Berge
e5679ec14b Move API action option to the correct argument 2025-08-07 16:19:13 +02:00
D. Berge
a1c174994c Remove debugging statements 2025-08-07 13:03:43 +02:00
D. Berge
2db8cc3116 Tweak wording 2025-08-07 12:38:04 +02:00
D. Berge
99b1a841c5 Let the user know when using a remote frontend.
Note: this relies on the gateway Nginx server configurations
including an X-Dougal-Server header, as follows:

add_header X-Dougal-Server "remote-frontend" always;
2025-08-07 12:30:28 +02:00
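
The Nginx directive above tags responses from the gateway; client-side detection could look roughly like this (a sketch: the header name is quoted from the commit message, but the endpoint and handling are assumed):

```js
// Sketch: detect a remote frontend by inspecting the X-Dougal-Server header.
const res = await fetch("/api/version");            // any API endpoint will do
if (res.headers.get("X-Dougal-Server") === "remote-frontend") {
  console.info("Served via a remote frontend gateway"); // e.g. notify the user
}
```
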
D. Berge
6629e25644 Do not error if version history is undefined 2025-08-07 11:03:07 +02:00
D. Berge
7f5f64acb1 Check for lineNameInfo when importing P1/11 2025-08-07 11:00:42 +02:00
D. Berge
8f87df1e2f Comment out debug output 2025-08-07 10:52:13 +02:00
D. Berge
8399782409 Set response auth headers conditionally 2025-08-07 10:42:37 +02:00
D. Berge
9c86018653 Auto-refresh materialised view if necessary 2025-08-07 10:42:08 +02:00
D. Berge
a15c97078b Fix typo in access middleware 2025-08-07 10:41:29 +02:00
D. Berge
d769ec48dd Request fresh responses when refreshing data from the server 2025-08-07 10:40:23 +02:00
D. Berge
fe421f545c Add data integrity check 2025-08-06 22:54:01 +02:00
D. Berge
caa8fec8cc Log warning 2025-08-06 22:52:06 +02:00
D. Berge
49fc260ace Clear cookie on logout 2025-08-06 22:51:44 +02:00
D. Berge
b7038f542c Fix storage of JWT in localStorage 2025-08-06 22:51:20 +02:00
D. Berge
40ad0e7650 Fix database upgrades 38, 39, 40.
Ensure the changes are applied to the public schema.
2025-08-06 22:50:20 +02:00
D. Berge
9006deb8be Change error notifications 2025-08-06 12:01:03 +02:00
D. Berge
6e19b8e18f Do not fail if old / new missing from notifications.
The server will actually remove those when the notification
would exceed a certain size, so it's expected that those might
be null.
2025-08-06 11:59:52 +02:00
D. Berge
3d474ad8f8 Update package-lock.json 2025-08-06 11:31:51 +02:00
D. Berge
821af18f29 Removed planned line points layer control.
Not necessary as we already have the preplots layer.
2025-08-06 11:25:44 +02:00
D. Berge
9cf15ce9dd Edit code comments 2025-08-06 11:24:39 +02:00
D. Berge
78838cbc41 Implement planned lines layer 2025-08-06 11:20:40 +02:00
D. Berge
8855da743b Handle refresh on data change for some layers.
Binary layers not included yet.
2025-08-06 11:17:37 +02:00
D. Berge
c67a60a7e6 Fix labels handling in events map layer 2025-08-06 11:14:20 +02:00
D. Berge
81e06930f0 Silence console error 2025-08-06 11:05:15 +02:00
D. Berge
0263eab6d1 Add extra mutations to plan Vuex module.
They're not actually needed though. 🙄
2025-08-06 11:03:11 +02:00
D. Berge
931219850e Fix wrong freezing of Vuex data.
It's the sequence items themselves that benefit from freezing,
not the sequence array itself.
2025-08-06 11:01:57 +02:00
D. Berge
12369d5419 Support Markdown-formatted snack messages 2025-08-06 11:01:10 +02:00
D. Berge
447003c3b5 Implement pub-sub handler system for ws notifications. 2025-08-06 10:59:17 +02:00
D. Berge
be7157b62c Downgrade gracefully if window.caches is not available.
This should not happen in production, as the Cache API is
widely implemented as of the date of this commit, but it
will not be available if the user is not in a secure
context. That should only happen during testing.
2025-08-06 10:45:05 +02:00
D. Berge
8ef56f9946 Pass a clone of Response to API callback 2025-08-06 10:42:34 +02:00
D. Berge
f2df16fe55 Fix getting project configuration data 2025-08-06 10:41:42 +02:00
D. Berge
96db6b1376 Add a more helpful message if cause of failure is known 2025-08-06 10:41:08 +02:00
D. Berge
36d86c176a Only send websocket notifications to authenticated users 2025-08-06 10:40:16 +02:00
D. Berge
9c38af4bc0 Improve handling of JWT over websocket.
When a valid `token` message is received from a client, the
socket server will automatically push refreshed tokens at
about half lifetime of the received JWT.

If an invalid token is received the connection is closed.

See #304.
2025-08-06 10:26:53 +02:00
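
The half-lifetime scheduling might look roughly like this on the server (a sketch assuming standard `iat`/`exp` JWT claims and a hypothetical `issueToken()` helper):

```js
// Sketch: push a refreshed token at about half the JWT's lifetime.
function scheduleRefresh (socket, token) {
  const claims = JSON.parse(
    Buffer.from(token.split(".")[1], "base64url").toString()
  );
  const lifetimeMs = (claims.exp - claims.iat) * 1000;
  setTimeout(() => {
    const fresh = issueToken(claims.sub);  // hypothetical re-issuing helper
    socket.send(JSON.stringify({ type: "token", token: fresh }));
  }, lifetimeMs / 2);
}
```
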
D. Berge
be5c6f1fa3 Fix user authentication.
* Use X-JWT header for sending authentication info
  both from server to client and from client to server.
* Send token in body of login response.
* Also use Set-Cookie: JWT=… so that calls that are
  not issued directly by Dougal (e.g. Deck.gl layers
  with a URL `data` property) work without having to
  jump through hoops.

Closes #321
2025-08-06 10:21:37 +02:00
D. Berge
17b9d60715 Make sourceLayer optional in getPickingInfo 2025-08-04 18:47:15 +02:00
D. Berge
e2dd563054 Save changes to package-lock.json 2025-08-03 13:50:59 +02:00
D. Berge
67dcc2922b Fix binary bundling of delta spread 2025-08-03 13:49:55 +02:00
D. Berge
11e84f47eb Fix refresh to remove only data for current project 2025-08-03 13:48:51 +02:00
D. Berge
1066a03b25 Leave layer menu open when still focused 2025-08-03 13:48:04 +02:00
D. Berge
08440e3e21 Add tooltip to heatmap control 2025-08-03 13:47:48 +02:00
D. Berge
d46eb3b455 Add gun misfire options to menu 2025-08-03 13:47:07 +02:00
D. Berge
864b430320 Fix no fire / autofire values (they're not boolean!) 2025-08-03 13:45:56 +02:00
D. Berge
61cbefd0e9 Tweak heatmap parameters 2025-08-03 13:45:31 +02:00
D. Berge
29c484affa Add misfire options to heatmap 2025-08-03 13:45:07 +02:00
D. Berge
0806b80445 Remove dead code 2025-08-03 13:43:53 +02:00
D. Berge
b5a3a22892 Add full screen control to map 2025-08-03 11:57:59 +02:00
D. Berge
c13aa23e2f Add heatmaps to map (various data facets) 2025-08-03 11:57:12 +02:00
D. Berge
3366377ab0 Use preplot point layers on map 2025-08-03 11:56:05 +02:00
D. Berge
59a90e352c Add tooltips for preplot layers 2025-08-03 11:53:55 +02:00
D. Berge
0f207f8c2d Add heatmap layer 2025-08-03 11:53:24 +02:00
D. Berge
c97eaa64f5 Add preplot point layers (sail / source line) 2025-08-03 11:52:48 +02:00
D. Berge
5b82f8540d Use DougalBinaryLoader for sequence points layers 2025-08-03 11:51:47 +02:00
D. Berge
d977d9c40b Add support for udv values 0 and 1 to DougalSequenceLayer.
udv = 0 → sail line points
udv = 1 → source line points
2025-08-03 11:44:42 +02:00
D. Berge
d16fb41f24 Add DougalBinaryLoader Deck.gl loader 2025-08-03 11:39:03 +02:00
D. Berge
c376896ea6 Also serve preplot source/sail points as binary.
This commit adds the ability to pack preplot points in Dougal
binary format. Sail line points take udv=0 and source line points
take udv=1; udv=2 remains sequence data.

Endpoints for retrieving the data in JSON, GeoJSON and binary
formats have also been added. Data may be retrieved as a single
line or for a whole project.
2025-08-03 11:17:31 +02:00
D. Berge
2bcdee03d5 Further refactor Map component.
Map.sequencesBinaryData is now a single object instead of an
array of objects.

DougalSequenceLayer has been greatly simplified. It now
inherits from ScatterplotLayer rather than CompositeLayer.

DougalEventsLayer added. It shows either a ScatterplotLayer
or a ColumnsLayer depending on zoom level.
2025-08-02 16:00:54 +02:00
D. Berge
44113c89c0 Further refactor Map component.
Layer and tooltip definitions have been split out into different
files as mixins.

Uses Dougal binary bundles.
2025-08-01 17:18:16 +02:00
D. Berge
17c6d9d1e5 Add DougalSequenceLayer 2025-08-01 17:16:36 +02:00
D. Berge
06cc16721f Remove SequenceDataLayer 2025-08-01 17:15:27 +02:00
D. Berge
af7485370c Limit number of simultaneous requests to the API 2025-08-01 17:11:34 +02:00
D. Berge
ad013ea642 Add additional formats for sequence list endpoint.
The original and default "Accept: application/json" will return
a sequence summary.

"Accept: application/geo+json" will return a GeoJSON of the
entire project.

"Accept: application/vnd.aaltronav.dougal+octet-stream" will
return the entire project in Dougal's binary format.
2025-08-01 17:07:37 +02:00
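
Client usage might look like this (the Accept strings are quoted from the commit message; the URL shape and project ID are assumed):

```js
// Sketch: pick the representation of the sequence list via content negotiation.
const res = await fetch("/api/project/1234/sequences", { // path is assumed
  headers: { Accept: "application/vnd.aaltronav.dougal+octet-stream" }
});
const bundle = await res.arrayBuffer(); // whole project in Dougal binary format
```
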
D. Berge
48d5986415 Change handling of sequence parameter.
Allow `null` to be used in addition to `0` in
db.sequence.get() to return all sequences.
2025-08-01 17:05:38 +02:00
D. Berge
471f4e8e64 Add synonyms to db.sequence.get() options 2025-08-01 17:05:05 +02:00
D. Berge
4be99370e6 Change the MIME type of binary responses 2025-08-01 16:50:32 +02:00
D. Berge
e464f5f887 Refactor code handling binary sequence requests.
Instead of the user giving the recipe for the payload, it now
only handles predefined payload configurations. Those are
denoted by the `type` query parameter. The only valid value
as of this commit is `type=2`.

Look at lib/binary/bundle.js for the definition of a type 2
bundle.
2025-08-01 16:47:50 +02:00
D. Berge
cc8d790ad8 Remove dead code (replaced by @dougal/binary) 2025-08-01 16:43:22 +02:00
D. Berge
32c6e2c79f Add @dougal/concurrency module 2025-08-01 11:22:30 +02:00
D. Berge
ba7221ae10 Implement getData*() functions in DougalBinaryBundle 2025-07-30 17:41:17 +02:00
D. Berge
1cb9d4b1e2 Add @dougal/binary module.
It encodes / decodes sequence / preplot data using an efficient
binary format for sending large amounts of data across the wire
and for (relatively) memory efficient client-side use.
2025-07-30 17:37:00 +02:00
D. Berge
2a0025cdbf Try to fix FSP / LSP times for the third time 2025-07-29 13:31:17 +02:00
D. Berge
f768f31b62 Aesthetic changes to map layers control 2025-07-28 12:09:02 +02:00
D. Berge
9f91b1317f Add map settings control (mock up).
This is not yet implemented but left visible for demo purposes.

Intended to configure things such as vessel track length, possibly
whether the latest track or the track within the current prospect
is shown, etc.
2025-07-28 12:06:56 +02:00
D. Berge
3b69a15703 Add manual refresh control to map.
It may or may not be permanent once tasks #322, #323, #324, #325
are implemented.

Closes #326
2025-07-28 12:05:10 +02:00
D. Berge
cd3bd8ab79 Fix FSP/LSP times (again) 2025-07-28 12:04:27 +02:00
D. Berge
df193a99cd Add sleep() method to main.js.
Useful when the UI needs to "pause" for UX reasons. Can be called
from any component with `this.$root.sleep(ms)`.
2025-07-28 12:02:49 +02:00
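
A plausible implementation of such a helper (a sketch; the actual body in main.js is not shown in the log):

```js
// Sketch: promise-based sleep, callable as `await this.$root.sleep(ms)`.
function sleep (ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```
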
D. Berge
580e94a591 Await on binary data download requests 2025-07-28 11:09:55 +02:00
D. Berge
3413641c10 Fix first and last shotpoint times in map tooltip 2025-07-28 11:01:38 +02:00
D. Berge
f092aff015 Fix navdata URL 2025-07-28 11:01:08 +02:00
D. Berge
94c6406ea2 Add missing dependency 2025-07-28 10:37:58 +02:00
D. Berge
244d84a3bd Add more layers to Map component.
This commit adds back the vessel track as well as other layers,
gives the option to load both point and line versions of the plan,
raw, and final sequences, and adds heatmaps showing positioning
error of raw and final data relative to preplots.

The implementation in this commit relies on translating the binary
sequence data into JSON (for the heatmaps) which is inefficient
but adequate as an initial proof of concept.
2025-07-28 10:14:41 +02:00
D. Berge
89c565a0f5 Protect against out of bounds array condition 2025-07-28 10:10:05 +02:00
D. Berge
31ac8d3c01 Add toJSON() function to binary decoder 2025-07-28 10:07:49 +02:00
D. Berge
3bb78040b0 Set correct Content-Type 2025-07-28 10:06:21 +02:00
D. Berge
1433bda14e Make the iterator more robust against failures.
If a sequence fails to be fetched, it will keep iterating rather
than throwing an error or returning invalid data.
2025-07-27 11:16:47 +02:00
D. Berge
c0ae033de8 Use Cache API to cache binary sequence data.
This speeds up loading when the user moves away from and then
revisits the map tab.

NOTE: As of this commit, there is no way to refresh or invalidate
the cache.
2025-07-27 11:15:09 +02:00
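
The Cache API pattern behind this is roughly the following (a sketch; the cache name is assumed):

```js
// Sketch: serve binary sequence data from the Cache API, fetching on a miss.
async function cachedFetch (url) {
  const cache = await caches.open("dougal-sequences"); // cache name is assumed
  const hit = await cache.match(url);
  if (hit) return hit;                   // revisit: no network round trip
  const res = await fetch(url);
  if (res.ok) await cache.put(url, res.clone()); // keep a copy for next time
  return res;                  // note: no invalidation yet, as the message says
}
```
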
D. Berge
05eed7ef26 Comment out Norwegian nautical charts layer.
It has apparently become inaccessible in recent times.
2025-07-27 11:07:08 +02:00
D. Berge
5d2ca513a6 Add check for WebGL support.
The intention is to fall back to the legacy map if WebGL is not
supported on a client.
2025-07-27 11:06:12 +02:00
D. Berge
b9c8069828 Add an error overlay.
Assigning to `error` on the Map component will cause an overlay
with an error <v-alert/> to be shown.
2025-07-27 11:03:26 +02:00
D. Berge
b80b8ffb52 Add cache option to api Vuex action.
It allows the caching and retrieval of requests using Cache API.
2025-07-27 11:01:34 +02:00
D. Berge
c2eb82ffe7 Modify view on map link 2025-07-26 19:14:29 +02:00
D. Berge
e517e2f771 Refactor map component.
Uses Deck.gl rather than Leaflet.
2025-07-26 19:13:58 +02:00
D. Berge
0afd54447f Add SequenceDataLayer Deck.gl class.
It takes the typed arrays returned by the binary-encoded
endpoints.
2025-07-26 19:06:56 +02:00
D. Berge
e6004dd62f Add link to binary library.
Same library is used server and client side.
2025-07-26 19:06:56 +02:00
D. Berge
f623954399 Adapt to new calling convention for Vuex action 2025-07-26 19:06:56 +02:00
D. Berge
f8d882da5d Replace text parameter by format in Vuex API call.
Instead of { text: true } as a Fetch option, one can
now specify { format: "text" }, as well as any of these
other options, which call the corresponding Fetch method:

* "arrayBuffer",
* "blob",
* "formData",
* "json",
* "text"
2025-07-26 19:06:56 +02:00
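
A usage sketch of the new option (the exact payload shape of the `api` action is assumed, not taken from the repo):

```js
// Sketch: request an ArrayBuffer instead of text from the Vuex `api` action.
const buffer = await this.$store.dispatch("api", [
  "/sequences?type=2",          // assumed URL
  { format: "arrayBuffer" }     // replaces the old { text: true } style option
]);
```
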
D. Berge
808c9987af Add binary format middleware for sequence data.
It responds to the MIME type:
application/dougal-map-sequence+octet-stream
2025-07-26 19:05:00 +02:00
D. Berge
4db6d8dd7a Add custom binary format packing / unpacking.
This series of custom binary messages is an alternative to JSON /
GeoJSON when huge amounts of data need to be transferred to and
processed by the client, such as a GPU-based map view showing all
the points for a prospect, or QC graphs, etc.
2025-07-26 19:05:00 +02:00
D. Berge
9a47977f5f Improve help dialogue.
- Shows frontend and backend versions
- Shows version release notes
2025-07-26 10:59:40 +02:00
D. Berge
a58cce8565 Add /version/history endpoint to API.
Retrieves Git tag annotations.
2025-07-26 10:58:42 +02:00
D. Berge
5487a3a49b Catch JWT expiration.
Closes #321
2025-07-26 10:56:23 +02:00
D. Berge
731778206c Show front and backend version on help dialogue 2025-07-25 23:15:07 +02:00
D. Berge
08e65b512d Inject frontend version as environment variable 2025-07-25 23:14:30 +02:00
D. Berge
9b05388113 Add database upgrade file 40 2025-07-25 21:17:20 +02:00
D. Berge
1b44389a1a Allow configuring the API URL via environment variable.
The environment variable DOUGAL_API_URL takes precedence
over the hard-coded value. For instance:

DOUGAL_API_URL=http://127.0.0.1:2999 will cause /api to
be proxied to the above URL (websockets are correctly
handled too) instead of the default.
2025-07-25 20:08:38 +02:00
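
In a Vue CLI project this typically ends up in the dev-server proxy configuration; a sketch of what that might look like (the vue.config.js contents are assumed, only the DOUGAL_API_URL variable name comes from the commit message):

```js
// Sketch: let DOUGAL_API_URL override the hard-coded proxy target.
module.exports = {
  devServer: {
    proxy: {
      "/api": {
        target: process.env.DOUGAL_API_URL || "http://localhost:3000",
        ws: true  // proxy websocket upgrades too
      }
    }
  }
};
```
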
D. Berge
0b3711b759 Fix typo 2025-07-25 20:08:08 +02:00
D. Berge
5a523d4941 Make projects table sorted by default 2025-07-25 20:07:40 +02:00
D. Berge
122951e3a2 Fix expected DB version for upgrade 38 2025-07-25 18:11:19 +02:00
D. Berge
90216c12e4 Rename database upgrades 2025-07-25 18:08:47 +02:00
D. Berge
9c26909a59 Fix npm run scripts 2025-07-25 17:54:56 +02:00
D. Berge
0427a3c18c Use Node workspaces to manage repo dependencies 2025-07-25 17:48:30 +02:00
D. Berge
c32e6f2b38 Make map API calls silent.
Otherwise we get spurious 404s and such.
2025-07-25 17:17:36 +02:00
D. Berge
546d199c52 Remove annoying Leaflet attribution control 2025-07-25 17:17:36 +02:00
D. Berge
6562de97b9 Make the CSS import from package not relative 2025-07-25 17:17:36 +02:00
D. Berge
c666a6368e Fix copy/paste logic for lineNameInfo widget 2025-07-25 14:41:21 +02:00
D. Berge
d5af6df052 Merge branch '177-refactor-users-code' into 'devel'
Refactor users code

Closes #177 and #176

See merge request wgp/dougal/software!57
2025-07-25 12:26:39 +00:00
D. Berge
0c5ea7f30a Merge branch '178-add-api-endpoints-for-user-management' into '177-refactor-users-code'
Add API endpoints for user management

See merge request wgp/dougal/software!58
2025-07-25 12:25:45 +00:00
D. Berge
302642f88d Fix JWT renewal over websocket 2025-07-25 14:21:26 +02:00
D. Berge
48e1369088 Fix host based authentication 2025-07-25 14:03:43 +02:00
D. Berge
daa700e7dc Add (temporarily disabled) menu option for vessel config.
The idea is to have a frontend access to a screen where duly
authorised users can modify vessel-wide configuration parameters.
2025-07-25 14:01:49 +02:00
D. Berge
8db2c8ce25 Use access rights mixin in Equipment view 2025-07-25 13:36:16 +02:00
D. Berge
890e48e078 Revert "Don't refresh projects if no user is logged in."
This reverts commit 3a0f720f2f.
2025-07-25 13:35:35 +02:00
D. Berge
11829555cf Add <v-tooltip/> showing permissions.
Hovering over the user avatar or a project name in the breadcrumbs
shows a tooltip with the relevant permissions.
2025-07-25 13:33:59 +02:00
D. Berge
07d8e97f74 Fix Markdown functions in root component 2025-07-25 13:32:30 +02:00
D. Berge
fc379aba14 Silence errors when refreshing projects.
We use this endpoint also to do autologins, so HTTP 403's are not
unexpected.
2025-07-25 13:31:28 +02:00
D. Berge
8cbacb9aa7 Allow silencing API request errors.
The {silent: true} option in the new `opts` argument to the
`api` action does the trick.
2025-07-25 13:30:26 +02:00
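
For example (a sketch; only the `silent` option name comes from the commit message, the payload shape is assumed):

```js
// Sketch: suppress error snacks for requests where failures are expected.
await this.$store.dispatch("api", ["/projects", { silent: true }]);
```
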
D. Berge
acb59035e4 Add missing file 2025-07-25 13:29:39 +02:00
D. Berge
b7d0ee7da7 Remove dead code from the frontend 2025-07-25 11:02:24 +02:00
D. Berge
3a0f720f2f Don't refresh projects if no user is logged in.
Avoids a 403.
2025-07-25 10:43:08 +02:00
D. Berge
6cf6fe29f4 Improve presentation of organisation component in project settings 2025-07-24 23:04:44 +02:00
D. Berge
6f0f2dadcc Add "actions" slot to DougalOrganisations component 2025-07-24 23:04:15 +02:00
D. Berge
64fba1adc3 Add project permissions tooltip to breadcrumbs 2025-07-24 23:03:41 +02:00
D. Berge
3ea82cb660 Fix reading of credentials for issuing JWT 2025-07-24 23:03:05 +02:00
D. Berge
84c1385f88 Refactor class User (clean up) 2025-07-24 23:02:30 +02:00
D. Berge
b1b7332216 Add access mixin to Project and use in child component 2025-07-24 20:43:22 +02:00
D. Berge
8e7451e17a Adapt the access rights mixin to new user management code 2025-07-24 20:42:25 +02:00
D. Berge
bdeb2b8742 Show organisation membership in user avatar 2025-07-24 20:41:07 +02:00
D. Berge
ccfabf84f7 Add user management page to frontend 2025-07-24 20:40:18 +02:00
D. Berge
5d4e219403 Refactor Vuex store to adapt to new User class 2025-07-24 20:38:51 +02:00
D. Berge
3b7e4c9f0b Add client-side User class derived from @dougal/user.
Adds methods to communicate with the backend.
2025-07-24 20:37:50 +02:00
D. Berge
683f5680b1 Add organisations configuration section to project settings UI 2025-07-24 20:36:45 +02:00
D. Berge
ce901a03a1 Add component for editing users 2025-07-24 20:35:46 +02:00
D. Berge
f8e5b74c1a Add components for editing organisations settings 2025-07-24 20:35:17 +02:00
D. Berge
ec41d26a7a Use @dougal/user, @dougal/organisations modules in frontend 2025-07-24 20:32:25 +02:00
D. Berge
386fd59900 Update API to handle permissions checks on most endpoints 2025-07-24 19:24:40 +02:00
D. Berge
e47020a21e Add /user endpoints to API 2025-07-24 19:23:43 +02:00
D. Berge
b8f58ac67c Add FIXME 2025-07-24 19:20:58 +02:00
D. Berge
b3e27ed1b9 Refactor auth.authentify.
We now get the user's details directly from the JWT token.
2025-07-24 19:15:36 +02:00
D. Berge
f5441d186f Refactor auth.access middleware.
It uses the @dougal/user and @dougal/organisations classes.
2025-07-24 19:14:19 +02:00
D. Berge
d58bc4d62e Remove unused code 2025-07-24 19:13:17 +02:00
D. Berge
01d1691def Fix login endpoint (checkValidCredentials is now async) 2025-07-24 19:09:39 +02:00
D. Berge
bc444fc066 Add dependency to project organisations cache 2025-07-24 18:48:22 +02:00
D. Berge
989ec84852 Refactor JWT credentials check to use class User 2025-07-24 18:36:34 +02:00
D. Berge
065f6617af Add class ServerUser derived from User.
Used on the backend. Adds methods to hash and check passwords and
to read user data from and write it to the database.
2025-07-24 18:31:51 +02:00
D. Berge
825530c1fe Use @dougal/user, @dougal/organisations modules in backend 2025-07-24 18:27:59 +02:00
D. Berge
1ef8eb871f Add @dougal/user NodeJS module.
Abstracts the concept of User in the new permissions model.
2025-07-24 18:22:44 +02:00
D. Berge
2e9c603ab8 Add @dougal/organisations NodeJS module.
Abstracts the concept of Organisations in the new permissions model.
2025-07-24 18:21:02 +02:00
D. Berge
7f067ff760 Add contextual info about sailline CSV files.
The information that has to go into those files, and its layout, is not
very obvious, so a contextual help dialogue and an example
file put the user on the right track.

Closes #319
2025-07-20 11:03:10 +02:00
D. Berge
487c297747 Add database upgrade file 37.
Fixes database upgrade file 35.
2025-07-19 12:20:55 +02:00
D. Berge
cfa771a830 Skip info for saillines with no preplot.
It may happen that the sailline info file has data for more lines
than are actually in the preplot (e.g., if importing a reduced
preplot file). In this case, we don't want a constraint violation
error due to missing corresponding lines in `preplot_lines`, so we
check for that and only import lines that do exist in `preplot_lines`.
2025-07-19 11:31:52 +02:00
D. Berge
3905e6f5d8 Update OpenAPI specification 2025-07-13 11:15:41 +02:00
D. Berge
2657c42dcc Fix export statement 2025-07-13 11:13:31 +02:00
D. Berge
63e6af545a Fix typo 2025-07-13 11:13:09 +02:00
D. Berge
d6fb7404b1 Adapt version.get middleware to new permissions approach 2025-07-13 00:07:52 +02:00
D. Berge
8188766a81 Refactor access to info table.
To adapt to the new permissions system.
2025-07-13 00:07:05 +02:00
D. Berge
b7ae657137 Add auth.operations middleware.
Adds an array of allowed operations on the given context to the request
under `req.user.operations`.
2025-07-13 00:02:48 +02:00
D. Berge
1295ec2ee3 Add function to return allowed operations in a given context 2025-07-13 00:01:15 +02:00
D. Berge
7c6d3fe5ee Check permissions against vessel if not on a project endpoint 2025-07-12 16:49:10 +02:00
D. Berge
15570e0f3d orgAccess(user, null, op) returns vessel access permissions.
If instead of a project ID, orgAccess receives `null`, it will
check permissions against the installation's own vessel rather
than against a specific project.
2025-07-12 16:47:39 +02:00
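
In other words, something along these lines (a sketch; `vesselOrganisations()` is named in the following log entry, the other helpers are hypothetical):

```js
// Sketch: fall back to vessel-level permissions when no project is given.
async function orgAccess (user, projectId, op) {
  const orgs = projectId === null
    ? await vesselOrganisations()             // the installation's own vessel
    : await projectOrganisations(projectId);  // hypothetical per-project lookup
  return checkOperation(user, orgs, op);      // hypothetical permission check
}
```
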
D. Berge
d551e67042 Add vesselOrganisations() function 2025-07-12 16:47:10 +02:00
D. Berge
6b216f7406 Add library function to retrieve vessel information.
In the `keystore` table, we now store information for our own
vessel (usually, where the Dougal server is installed). This
is an access function to retrieve that information.

The info stored for the vessel looks like this:

```yaml
type: vessel
key: ego
data:
    imo: 9631890
    mmsi: 257419000
    name: Havila Charisma
    contacts:
        -
            name: HC OM
            phone: tel:+47123456789
            email: hc.om@magseisfairfield.com
    organisations:
        Havila Charisma:
            read: true
            write: true
            edit: true
```
2025-07-12 16:42:28 +02:00
D. Berge
a7e02c526b Add function argument defaults.
This will cause the function to return a safe (false) value
rather than erroring.
2025-07-12 16:40:18 +02:00
D. Berge
55855d66e9 Remove dead code 2025-07-12 12:14:12 +02:00
D. Berge
ae79d90fef Remove obsolete Vuex getters 2025-07-12 11:31:38 +02:00
D. Berge
c8b2047483 Refactor client-side access checks.
Go from a Vuex based to a mixin based approach.
2025-07-12 11:31:38 +02:00
D. Berge
d21cde20fc Add mixin to check access rights client-side.
This replaces the Vuex getters approach (writeaccess, adminaccess)
which, as access rights are no longer global but dependent on each
project's settings, is no longer appropriate.
2025-07-12 11:31:38 +02:00
D. Berge
10580ea3ec Create server-side organisations module 2025-07-12 11:31:38 +02:00
D. Berge
25f83d1eb3 Share access() function between front and back end.
This is so that any changes to the code are reflected on both sides.
2025-07-12 11:31:38 +02:00
D. Berge
dc294b5b50 Change prefix used for storing user preferences.
The `role` value no longer exists; we're replacing that with the
user ID.
2025-07-12 11:31:38 +02:00
D. Berge
b035d3481c Ensure users have at least read access to most endpoints 2025-07-11 22:49:28 +02:00
D. Berge
ca4a14ffd9 Use new orgs based method for authorisation 2025-07-11 22:48:44 +02:00
D. Berge
d77f7f66db Refresh organisations cache on project update 2025-07-11 22:48:06 +02:00
D. Berge
6b6f545b9f Filter list of projects to only those readable by user 2025-07-11 22:47:32 +02:00
D. Berge
bdf62e2d8b Show project orgs in projects list 2025-07-11 22:46:47 +02:00
D. Berge
1895168889 Show user orgs in avatar 2025-07-11 22:46:47 +02:00
D. Berge
8c875ea2f9 Return organisations as part of the projects listing 2025-07-11 22:46:47 +02:00
D. Berge
addbe2d572 Refactor user authentication code to use database 2025-07-11 22:46:47 +02:00
D. Berge
85f092b9e1 Upgrade minimum required database version 2025-07-11 22:46:47 +02:00
D. Berge
eb99d74e4a Add database upgrade file 38.
Adds default user (superuser).
2025-07-11 22:46:47 +02:00
D. Berge
e65afdcaa1 Add database upgrade file 37.
Creates `keystore` table.
2025-07-11 22:46:47 +02:00
D. Berge
0b7e9e1d01 Add functions to check operation access via organisations 2025-07-11 22:46:47 +02:00
D. Berge
9ad17de4cb Merge branch '76-add-configuration-gui' into 'devel'
Resolve "Add configuration GUI"

Closes #294, #295, #296, #298, #76, #297, #129, #313, #312, #305, #264, #307, #303, #300, #301, #302, #290, #291, #292, and #293

See merge request wgp/dougal/software!17
2025-07-09 18:11:50 +00:00
D. Berge
071fd7438b Reimplement <dougal-project-settings-online-line-name-format/>.
Closes #297.
2025-07-09 16:45:35 +02:00
D. Berge
9cc21ba06a Mark planned reshoots as such 2025-07-09 16:40:48 +02:00
D. Berge
712b20c596 Add API endpoint to retrieve line name properties.
This will be needed by the configuration GUI.
2025-07-09 16:38:41 +02:00
D. Berge
8bbe3aee70 Make planned line names configurable.
Line names are made up based on:

* Certain properties defined by the system
* Values assigned to those properties either by the system
  or by the user (line number, sequence, attempt, etc.)
* A line format specification configured by the user for each
  project (`online.line.lineNameBuilder.fields`)

Closes #129.
2025-07-09 16:30:26 +02:00
D. Berge
dc22bb95fd Disable 'no_fire' test due to changes in Smartsource software 2025-07-03 11:48:42 +02:00
D. Berge
0ef2e60d15 Do not fail on non-existing property 2025-07-03 11:44:52 +02:00
D. Berge
289d50d9c1 Update caniuse database 2025-06-27 00:23:37 +02:00
D. Berge
3189a06d75 Change tcpdump flags to capture on any interface 2025-06-27 00:05:23 +02:00
D. Berge
9ef551db76 Fix logical→physical path conversion for absolute paths 2025-06-26 23:57:19 +02:00
D. Berge
e6669026fa Add validation messages for final P1/11 lineNameInfo 2025-06-26 23:48:35 +02:00
D. Berge
12082b91a3 Add validation messages for raw P1/11 lineNameInfo 2025-06-26 23:47:38 +02:00
D. Berge
7db9155899 Add default fields for raw P1/11 lineNameInfo 2025-06-26 23:46:49 +02:00
D. Berge
f8692afad3 Add named slots to DougalProjectSettingsFileMatchingParameters.
Used to display error or information messages.
2025-06-26 23:41:51 +02:00
D. Berge
028cab5188 Add default fields for raw P1/11 lineNameInfo 2025-06-26 23:41:00 +02:00
D. Berge
fc73fbfb9f Add GUI for editing lineNameInfo of final P1/111 2025-06-26 23:40:28 +02:00
D. Berge
96a8d3689a Add defaults for lineNameInfo text and fields 2025-06-26 23:39:47 +02:00
D. Berge
7a7106e735 Default to text if no field type is specified. 2024-08-22 18:44:24 +02:00
D. Berge
d5a10ca273 Allow also str as a field type specifier 2024-08-22 18:43:57 +02:00
D. Berge
e398f2d3cd Stop attempting to send a spurious 404.
This was resulting in a bunch of "headers already sent" messages.
2024-05-09 14:18:56 +02:00
D. Berge
d154e75797 Add more info to diagnostics endpoint 2024-05-09 14:02:18 +02:00
D. Berge
af0df23cc4 Add diagnostics API endpoint.
Only available with write access and above.

Reports used and available filesystem sizes and database space
usage.
2024-05-08 16:27:32 +02:00
D. Berge
ec26285e53 Refresh caniuse's browser statistics.
In other words:

npx update-browserslist-db@latest
2024-05-06 12:13:06 +02:00
D. Berge
83b3ec5103 Add database upgrade file 36.
Fixes #313.
2024-05-06 12:06:30 +02:00
D. Berge
86aaade428 Add database upgrade file 35.
Fixes #312.
2024-05-06 11:11:55 +02:00
D. Berge
fbb4e1efaf Fix insert statement in database upgrade file 33.
This makes it possible to run the script on an already upgraded
schema.
2024-05-06 11:10:46 +02:00
D. Berge
73fb7a5053 Make script executable 2024-05-05 19:35:19 +02:00
D. Berge
bc5dfe9c2a Add fixed strings support to parse_line 2024-05-05 19:34:01 +02:00
D. Berge
524420d945 Support lineNameInfo in SmartSource header imports.
Closes #305.
2024-05-04 17:41:14 +02:00
D. Berge
e48c734ea9 Support lineNameInfo in final P1/11 imports 2024-05-04 17:35:05 +02:00
D. Berge
5aaad01238 Support lineNameInfo in raw P1/11 imports 2024-05-04 17:33:50 +02:00
D. Berge
90782c1b09 Support import of preplot lines ancillary information.
Closes #264.
2024-05-04 17:32:30 +02:00
D. Berge
4368cb8571 Update minimum required database schema to 0.5.0 2024-05-04 17:30:34 +02:00
D. Berge
40bc1f9293 Fix log sequence view 2024-05-04 17:29:31 +02:00
D. Berge
8c6eefed97 Add support for fixed strings to file parameters widget 2024-05-04 17:27:55 +02:00
D. Berge
59971a43fe Support fixed text in <dougal-fixed-string-decoder/> 2024-05-04 17:27:08 +02:00
D. Berge
a2a5a783a3 Add <dougal-fixed-string-text/> component.
It's similar to <dougal-fixed-string-decoder-field/> but it handles
static strings.

Used to match, e.g., file names, where certain parts of the name
are expected to contain a specific string (such as a project prefix
and the like).
2024-05-04 17:24:52 +02:00
D. Berge
d3bdeff8df Add database upgrade file 34.
Closes #307.
2024-05-04 17:17:27 +02:00
D. Berge
4a2bed257d Add database upgrade file 33.
Related to #264.
Closes #303.
2024-05-04 17:15:51 +02:00
D. Berge
995e0b9f81 Remove unused import 2024-05-03 11:46:16 +02:00
D. Berge
3488c8bf4d Support preplot imports in additional formats.
This adds support for SPS v1, SPS v2.1, custom fixed-width,
CSV and custom sailline info preplot imports.
2024-05-03 11:44:32 +02:00
D. Berge
7e1023f6e8 Support import of delimited formats.
This supports CSV and similar formats, as well as sailline
imports, which use a CSV file with a specific set of column
definitions.

Does not yet support P111 import (for which there is an
implementation already).
2024-05-03 11:42:20 +02:00
D. Berge
41e058ac64 Add TODO comment 2024-05-03 11:41:50 +02:00
D. Berge
2086133109 Fix bool casting.
A true value is any text that starts with `t` (case insensitive) or
any non-zero integer.
2024-05-03 11:40:53 +02:00
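
The rule stated above, as a sketch (the actual import code is Python, per other entries in this log; JavaScript is used here purely for illustration):

```js
// Sketch of the casting rule: text starting with "t"/"T", or a non-zero integer.
function castBool (text) {
  const trimmed = String(text).trim();
  if (/^t/i.test(trimmed)) return true;   // "t", "True", "TRUE", ...
  const n = Number.parseInt(trimmed, 10);
  return Number.isInteger(n) && n !== 0;  // any non-zero integer is true
}
```
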
D. Berge
bb70cf1a3d Check enum keys against text instead of cast value 2024-05-03 11:40:21 +02:00
D. Berge
be0d7b269f Support import of various fixed-width formats.
This supports reading: SPSv1, SPSv2.1, P190 and custom
fixed-width formats. Supports skipping lines by startswith()
matching or by complete match (e.g., "EOF").

Closes #300 (SPS v1)
Closes #301 (SPS v2.1)
Closes #302 (P1/90)
2024-05-01 10:47:54 +02:00
D. Berge
934b921f69 Include schema in returned survey configuration object 2024-05-01 10:40:04 +02:00
D. Berge
c20b3b64c7 Fix symlink target 2024-05-01 10:40:04 +02:00
D. Berge
8ec918bc7c Rename "SPS" preplots import to "legacy fixed width" 2024-05-01 10:40:04 +02:00
D. Berge
6fa0f8e659 Expose Buffer to Webpack configuration 2024-05-01 10:40:04 +02:00
D. Berge
a9f93cfd17 Add save / upload controls to configuration toolbar 2024-05-01 10:40:04 +02:00
D. Berge
9785f4541b Add dirty configuration flag 2024-05-01 10:40:04 +02:00
D. Berge
62ab06b4a7 Refactor configuration GUI.
Another refactoring. What we're doing now is eliminating the
need to save individually on each section. Configuration changes
are done directly on the local configuration and then the local
configuration is saved, downloaded or discarded in one go.
2024-05-01 10:40:04 +02:00
D. Berge
c7270febfc Add project configurations upgrade script.
This script rewrites project configurations to take into account
various structural changes in the configuration object.

The script can be run without arguments, in which case it will
upgrade the configuration of every project found in the local
database, or one or more project IDs can be given as command
line arguments in order to upgrade only those projects.

Configurations that have already been upgraded will not be
touched.

For other projects, both the original and new configurations will
be saved to file in the current directory, as well as two scripts:
one commits the new configuration to the server, and the other
restores the original one.

This script connects directly to the database, using the same
mechanisms as the Dougal server. It is recommended to run it
locally on the server host.

The restore scripts use the HTTP API, which they expect to find
on http://localhost:3000/api, so it is also recommended to run
them in the local server.

Closes #290.
Closes #291.
Closes #292.
Closes #293.
Closes #294.
Closes #295.
Closes #296.
Closes #297.
Closes #298.
2024-05-01 10:40:04 +02:00
D. Berge
2dffd93cfe Simplify expression 2024-05-01 10:40:04 +02:00
D. Berge
867a534910 Remove debugging statements 2024-05-01 10:40:04 +02:00
D. Berge
60aaaf9e04 Aesthetic changes 2024-05-01 10:40:04 +02:00
D. Berge
b64a99ab19 Add option to upload the configuration to the server 2024-05-01 10:40:04 +02:00
D. Berge
69fce0e0dc Add option to load configuration from local file.
Supports both JSON and YAML.
2024-05-01 10:40:04 +02:00
D. Berge
8dd971ffec Add option to save local copy of configuration to local file 2024-05-01 10:40:04 +02:00
D. Berge
fd84eb1ebb Add "advanced configuration" view.
This view shows a tree view of the raw JSON configuration
object, allowing the user to add / edit / delete any properties
whatsoever. It is semi-hidden behind a context menu. The user
has to right-click on the header of the left-hand column showing
the list of configuration sections and then click on the red
"advanced configuration" button. In the advanced configuration
view there is another button to go back to normal configuration.

It is only possible to save / refresh the configuration from the
normal view.
2024-05-01 10:40:04 +02:00
D. Berge
53b4213a05 Fix configuration not being refreshed 2024-05-01 10:40:04 +02:00
D. Berge
3fbc266809 Add configuration GUI for SEG-Y near field hydrophone data 2024-05-01 10:40:04 +02:00
D. Berge
66a758d91f Refactor Smartsource header reading configuration.
- Use a fixed width name parser rather than regular expressions
- Move the Smartsource header files configuration to a different
  part of the configuration object.
2024-05-01 10:40:04 +02:00
D. Berge
6cebf376d0 Refactor <dougal-project-settings-final-pending/> 2024-05-01 10:40:04 +02:00
D. Berge
02adbdf530 Refactor <dougal-project-settings-final-p111/>.
Use fixed width name decoding instead of regular expression.
2024-05-01 10:40:04 +02:00
D. Berge
2357381ee6 Refactor <dougal-project-settings-raw-p111/>.
Use fixed width name decoding instead of regular expression.
2024-05-01 10:40:04 +02:00
D. Berge
5245e6a135 Refactor <dougal-project-settings-raw-ntbp/> 2024-05-01 10:40:04 +02:00
D. Berge
d93b8f8a9c Refactor <dougal-project-settings-file-matching-parameters/>.
It uses a fixed width format specification instead of a regular
expression.

It shows a preview of what parts of the string are decoded as what.
2024-05-01 10:40:04 +02:00
D. Berge
8b47fc4753 Refactor <dougal-project-settings-asaqc/>.
- Uses the new interface with the main component
- Changes the path where ASAQC related settings are saved,
  from $.asaqc to $.cloud.asaqc.
- Adds field for configuring the subscription key.
2024-05-01 10:40:04 +02:00
D. Berge
a0b3568a10 Refactor <dougal-project-settings-online-line-name-format/>.
Uses a fixed width specification instead of regular expressions
to decode line names from the navigation system.
2024-05-01 10:40:04 +02:00
D. Berge
8895a948cf Refactor preplots configuration GUI.
This introduces a number of changes, most notably an easier way
to specify fixed width formats and support for configuring
multiple import options (actual SPSv1, SPSv2.1, P1/90, CSV, …)

Note that only the configuration GUI is done, support for actually
importing those formats has not been implemented as of this commit.
2024-05-01 10:40:04 +02:00
D. Berge
afe04f5693 Refactor <dougal-project-settings-binning/> 2024-05-01 10:40:04 +02:00
D. Berge
c3a56bf7e2 Refactor <dougal-project-settings-production/> 2024-05-01 10:40:04 +02:00
D. Berge
18fcf42bc3 Refactor <dougal-project-settings-planner/> 2024-05-01 10:40:04 +02:00
D. Berge
ad48ac9998 Refactor <dougal-project-settings-file-path/> 2024-05-01 10:40:04 +02:00
D. Berge
7ab6be5c67 Refactor <dougal-project-settings-binning/> 2024-05-01 10:40:04 +02:00
D. Berge
2f56d377c5 Refactor <dougal-project-settings-groups/> 2024-05-01 10:40:04 +02:00
D. Berge
d1c041995d Refactor <dougal-project-settings-name-id/> 2024-05-01 10:40:04 +02:00
D. Berge
399e86be87 Refactor the interface between ProjectSettings and subcomponents.
This is still not set in stone and not fully consistent from one
subcomponent to another, but the general idea is that instead of
passing everything in one property via v-model we use v-bind
instead with a variable list of props depending on the needs of
the subcomponent.

We listen for @input and a new @merge event in order to apply
any changes to the *local* configuration. The local configuration
then needs to be uploaded to the server via a separate step. We
might change this in a later commit, so that changes made in
subcomponents are automatically applied to the local configuration.
2024-05-01 10:40:04 +02:00
D. Berge
13f68d7314 deepValue with an empty path returns the object itself 2024-05-01 10:40:04 +02:00
D. Berge
80de0c1bb0 Modify deepSet() to allow appending to arrays 2024-05-01 10:40:04 +02:00
D. Berge
26a487aa47 Add json-builder component.
It displays a JSON object as a <v-treeview/>, with editing
capabilities.
2024-05-01 10:40:04 +02:00
D. Berge
53e7a06a18 Add Vue watch mixin to update a variable on changes to another.
To be used where adding .sync to props is not convenient for
one reason or another.
2024-05-01 10:40:04 +02:00
D. Berge
efe64f0a8c Implement PUT method for project configuration endpoint.
In short:

POST creates a new project
PUT overwrites a project configuration with a new one
PATCH merges the request body with the existing configuration
2024-05-01 10:40:04 +02:00
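
For example (a sketch; the endpoint path is assumed, following the curl example further down the log):

```js
// Sketch: overwrite a project's configuration wholesale with PUT.
await fetch("/api/project/1234/configuration", {   // path is assumed
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(newConfiguration)
});
// A PATCH with a partial body would instead be merged into the existing
// configuration, and a POST to /api/project would create a new project.
```
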
D. Berge
313e9687bd Bring all the lib/utils from the frontend to the backend.
The idea being that eventually we will symlink the lib/utils
directory so that the same routines are available on both the
frontend and the backend.
2024-05-01 10:40:04 +02:00
D. Berge
09fb653812 Strip whitespace 2024-05-01 10:40:04 +02:00
D. Berge
0137bd84d5 Add Vue component for configuring sailline CSV imports.
Sailline CSV imports are related to issue #264. Not yet implemented
server-side.
2024-05-01 10:40:04 +02:00
D. Berge
f82f2c78c7 Add Vue component for handling delimited strings.
<dougal-delimited-string-decoder/> is intended for providing a UI
for configuring text-delimited import settings (such as CSV imports).
2024-05-01 10:40:04 +02:00
D. Berge
9f1fc3d19c Make Vue component reusable.
This converts <dougal-fixed-width-format/> into a more reusable
<dougal-fixed-string-decoder/> component.
2024-05-01 10:40:04 +02:00
D. Berge
873d7cfea7 Add utility Vue components.
This commit adds <dougal-field-content/> and
<dougal-field-content-dialog/>, which can be
used to configure certain properties of an
object. Intended for use while editing project
configurations.
2024-05-01 10:40:04 +02:00
D. Berge
2fa9d99eeb Add YAML frontend dependency.
To download / upload configurations.
2024-05-01 10:40:04 +02:00
D. Berge
12b28cbb8d Add csv-parse dependency to frontend.
Also requires a Buffer polyfill.
2024-05-01 10:40:04 +02:00
D. Berge
436a9b8289 Add utility function to truncate long strings 2024-05-01 10:40:04 +02:00
D. Berge
b3dbc0f417 Add utility function to create HSL colours 2024-05-01 10:40:04 +02:00
D. Berge
6d417a9272 Add utility functions.
The functions are:

- deepMerge()   Merge two objects
- deepCompare() Loose deep comparison
- deepEqual()   Strict deep comparison
- deepSet()     Set nested object property value
- deepValue()   Retrieve nested object property value
2024-05-01 10:40:04 +02:00
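
A sketch of the two accessors' contracts (illustrative implementations, not copied from the repo):

```js
// Sketch: deepValue walks a key path; an empty path returns the object itself.
function deepValue (obj, path = []) {
  return path.reduce((node, key) => (node == null ? node : node[key]), obj);
}

// Sketch: deepSet creates intermediate objects as needed before assigning.
function deepSet (obj, path, value) {
  const parent = path.slice(0, -1).reduce(
    (node, key) => (node[key] ??= {}),
    obj
  );
  parent[path[path.length - 1]] = value;
  return obj;
}
```
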
D. Berge
b74419f770 Reuse deepMerge.js from the backend libs 2024-05-01 10:40:04 +02:00
D. Berge
cae57e2a64 Ensure we get a fresh response 2024-05-01 10:40:04 +02:00
D. Berge
cd739e603f Fix configuration object data corruption 2024-05-01 10:40:04 +02:00
D. Berge
beeba966dd Cope with empty result 2024-05-01 10:40:04 +02:00
D. Berge
544c4ead76 Remove trailing slash from URL 2024-05-01 10:40:04 +02:00
D. Berge
4486fc4afc Improve contrast of new group item 2024-05-01 10:40:04 +02:00
D. Berge
a55d2cc6fc Update database templates to v0.4.5 2024-05-01 10:40:04 +02:00
D. Berge
402a3f9cce Add code for a ‘new project’ button to project list navigation.
This is currently disabled though (value in route/index.js is
commented out) as it is not possible at the moment to create
new projects fully from scratch from the frontend. See comment
on previous commit.

NB: projects may be created fully from scratch by making an API
request with a suitable YAML / JSON configuration file, thus:

curl -vs "https://[hostname]/api/project" -X POST \
    -H "Content-Type: application/yaml"
    --data-binary @/path/to/configuration.yaml
2024-05-01 10:40:04 +02:00
D. Berge
1801fdb052 Add project creation details component.
This is not usable at the moment as the backend requires even
more details, such as binning parameters, which this dialogue
does not provide.

It might be a matter of relaxing the rules on the backend or,
perhaps more likely, rethinking the project creation / editing
frontend. Refactoring the frontend so that the whole configuration
is saved in one go, rather than piecemeal as is currently done,
might make it easier to work on a configuration (especially a
new one) fully offline.
2024-05-01 10:40:04 +02:00
D. Berge
be904d8a00 Add ‘groups’ column to ProjectList table 2024-05-01 10:40:04 +02:00
D. Berge
2131cdf0c1 Add project cloning option to ProjectList 2024-05-01 10:40:04 +02:00
D. Berge
15242de2d9 Add configuration settings tab to project navigation bar.
Only for admin users.
2024-05-01 10:40:04 +02:00
D. Berge
b4aed52976 Add project settings cloning component.
Asks for the new ID, name and root file path.
2024-05-01 10:40:04 +02:00
D. Berge
1b85b5cd4b Remove cloning control stub.
Cloning takes place from the project list; we don't really need
to duplicate that functionality here for the time being.
2024-05-01 10:40:04 +02:00
D. Berge
f157f49312 Use project list from Vuex 2024-05-01 10:40:04 +02:00
D. Berge
3d42ce6fbc Add context menu with ‘Edit project settings’ option 2024-05-01 10:40:04 +02:00
D. Berge
4595dddc24 Add ProjectSettings view 2024-05-01 10:40:04 +02:00
D. Berge
642f5a7585 Add project configuration components.
The configuration settings are quite complex so we divide the
GUI into modular components.
2024-05-01 10:40:04 +02:00
D. Berge
e7c29ba14c Add file browsing components.
Essentially, these are a file selection dialog.
2024-05-01 10:40:04 +02:00
D. Berge
d919fb12db Add control to filter out archived projects in ProjectList 2024-05-01 10:40:04 +02:00
D. Berge
c21f9c239e Merge branch '304-refresh-authentication-status-for-connected-users' into 'devel'
Resolve "Refresh authentication status for connected users"

Closes #304

See merge request wgp/dougal/software!56
2024-05-01 08:23:14 +00:00
D. Berge
2fb1c5fdcc Process incoming JWT WebSocket messages 2024-05-01 10:20:09 +02:00
D. Berge
c6b99563d9 Send a request for new credentials at regular intervals.
Every five minutes, a message is sent via WebSocket to ask the
server for a refreshed JWT token.
2024-05-01 10:19:00 +02:00
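
A minimal sketch of that mechanism, assuming a { type: "jwt" } message
shape (the actual protocol is not shown in the commit):

    const ws = new WebSocket("wss://[hostname]/ws");
    const REFRESH_INTERVAL = 5 * 60 * 1000; // five minutes

    setInterval(() => {
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify({ type: "jwt" })); // ask for a refreshed token
      }
    }, REFRESH_INTERVAL);
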
D. Berge
76a90df768 Send "Authorization: Bearer …" on API requests.
We need this because we might have more recent credentials than
those in the cookie store.
2024-05-01 10:15:26 +02:00
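
A sketch of the header logic, assuming the token comes from the JWT Vuex
getter added in the commit below:

    async function apiFetch(path, jwt, options = {}) {
      const headers = { ...(options.headers || {}) };
      if (jwt) headers["Authorization"] = `Bearer ${jwt}`; // fresher than the cookie
      return fetch(path, { ...options, headers });
    }
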
D. Berge
ea8ea12429 Add JWT Vuex getter 2024-05-01 10:14:55 +02:00
D. Berge
7bd2319cd7 Allow setting credentials directly via the Vuex store.
Until now, credentials were set indirectly by reading the browser's
cookie store. This change allows us to receive credentials via other
mechanisms, notably WebSockets.
2024-05-01 10:13:14 +02:00
D. Berge
a9270157ea Process JWT messages over WebSockets 2024-05-01 10:06:35 +02:00
D. Berge
d2f94dbb88 Refactor JWT token verification 2024-05-01 10:05:48 +02:00
D. Berge
1056122fff Fix missing parenthesis 2024-04-28 18:37:30 +02:00
D. Berge
9bd0aca18f Add debugging statements to ETag middleware 2023-11-04 10:45:50 +01:00
D. Berge
60932300c1 Ensure that project is defined.
Which would not be the case for the `project` event unless we
look at the `new` and `old` properties.
2023-11-04 10:45:50 +01:00
D. Berge
12307b7ae6 Refactor ETag watcher to use path-to-regexp.
Simplifies the code and makes it easier to look at.
2023-11-04 10:45:50 +01:00
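
The gist of the refactoring, sketched against the path-to-regexp v6
`match` helper (route pattern and cache shape assumed):

    const { match } = require("path-to-regexp");

    const matchProject = match("/api/project/:project/:rest(.*)?");

    function invalidateFor(url, cache) {
      const m = matchProject(url);
      if (m) cache.delete(m.params.project); // hypothetical ETag cache keyed by project
    }
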
D. Berge
ceeaa4a8f3 Add path-to-regexp dependency. 2023-11-04 10:45:50 +01:00
D. Berge
3da54f9334 Always request a fresh response from the config endpoint 2023-11-04 10:36:58 +01:00
D. Berge
4c612ffe0a Take etc/www/config.json out of revision control.
This file contains site-specific configuration. Instead, an
example.config.json is now provided.
2023-11-03 21:30:22 +01:00
D. Berge
7076b51a25 Add auth.access.role(roles) higher order middleware 2023-11-03 21:22:02 +01:00
D. Berge
fe5ca06060 Return a JSON response for all 404s.
When an endpoint did not exist, the default expressjs response
was being returned, which is text/html.
2023-11-03 18:52:31 +01:00
D. Berge
71467dddf9 Report also request body size, if applicable 2023-11-03 18:51:43 +01:00
D. Berge
246f01efbe Report requested URLs and user data in debug mode 2023-11-02 23:52:15 +01:00
D. Berge
68bf853594 Add comments 2023-11-02 23:51:53 +01:00
D. Berge
4a18cb8a81 Remove useless code 2023-11-02 23:51:05 +01:00
D. Berge
c615727acf Don't require authentication for the /version endpoint.
It will still hide the `db` and `os` values from non-admins though.
2023-11-02 23:48:46 +01:00
D. Berge
2e21526fca Simplify versions handling 2023-11-02 23:47:13 +01:00
D. Berge
3709070985 Add a start script to package.json.
So that `npm start` will run.
2023-11-02 23:40:41 +01:00
D. Berge
2ac963aa4f Update redoc-cli version 2023-11-02 20:23:04 +01:00
D. Berge
db7b385d66 Don't show logo on graph toolbar 2023-11-02 20:05:18 +01:00
D. Berge
d91a1b1302 Do show a legend for shots with final data.
Fixup for commit e4607a095b.
2023-11-02 20:03:57 +01:00
D. Berge
fa031d5fc9 Update API specification 2023-11-02 19:59:02 +01:00
D. Berge
620d5ccf47 Add /version API endpoint 2023-11-02 19:48:30 +01:00
D. Berge
f0fa2b75d5 Add more details to version() return value 2023-11-02 19:46:44 +01:00
D. Berge
46bb207dfb Remove debugging artefact 2023-11-02 15:32:48 +01:00
D. Berge
f7a386d179 Merge branch '287-the-project_summary-view-is-too-slow' into 'devel'
Resolve "The `project_summary` view is too slow"

Closes #287

See merge request wgp/dougal/software!55
2023-11-02 14:29:35 +00:00
D. Berge
e4607a095b Don't show a legend for points without gun data 2023-11-02 15:27:06 +01:00
D. Berge
4b0d42390f Show a message if plotting a c-o with no final data 2023-11-02 15:26:37 +01:00
D. Berge
114e41557f Don't show graph if there is no data 2023-11-02 15:25:45 +01:00
D. Berge
e605320503 Refresh shotlog on sequence change 2023-11-02 15:24:45 +01:00
D. Berge
6606c7a6c1 Do not show c-o for raw sequences 2023-11-02 15:24:28 +01:00
D. Berge
e3bf671a49 Also monitor raw_shots events 2023-11-02 15:23:45 +01:00
D. Berge
3e08dfd45b Ignore shots without source data 2023-11-02 13:51:07 +01:00
D. Berge
f968cf3b3c Bump the required database schema version 2023-11-02 13:25:34 +01:00
D. Berge
b148ed2368 Add refresh-project-summary periodic task.
It listens for events that might indicate that the project_summary
materialised view needs to be refreshed and schedules a refresh.

Refreshes are throttled to a maximum of one every throttlePeriod
milliseconds so that things don't get too crazy for instance when
importing a lot of data.
2023-11-02 13:25:34 +01:00
D. Berge
cb35e340e1 Change the periodic tasks interface to support an init() function.
When a task needs to keep state, it can do so via a closure.
2023-11-02 13:25:34 +01:00
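
A sketch of a stateful task under this interface, combined with the
throttling described in the commit above (names and timings assumed):

    // init() returns the task function; its state lives in the closure.
    function init({ db, throttlePeriod = 60000 } = {}) {
      let timer = null;
      return function scheduleRefresh() {
        if (timer) return; // a refresh is already pending; coalesce
        timer = setTimeout(async () => {
          timer = null;
          await db.project.summary.refresh(); // refresh() from the commit below
        }, throttlePeriod);
      };
    }
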
D. Berge
6c00f16b7e Add a refresh() method to db.project.summary 2023-11-02 13:25:34 +01:00
D. Berge
ca8dd68d10 Add database upgrade file 32. 2023-11-02 13:25:34 +01:00
D. Berge
656f776262 Do not cache any responses containing cookies 2023-11-02 13:24:40 +01:00
D. Berge
e1b40547f1 Deconflict Webpack's dev-server websocket.
It uses /ws which is the same path Dougal uses.

Changing Dougal's path to something more imaginative requires
reconfiguring Nginx though.
2023-11-02 13:20:41 +01:00
D. Berge
98021441bc Revert "Rename websocket /ws → /dougal-websocket."
This reverts commit 4a8d3a99c1.
2023-11-02 13:18:37 +01:00
D. Berge
4a8d3a99c1 Rename websocket /ws → /dougal-websocket.
The Webpack dev server seems to really like /ws and ignores any
attempts to set a different path, so we just rename our websocket
instead.
2023-11-02 11:39:49 +01:00
D. Berge
7dee457fa1 Add FIXME comment 2023-11-02 11:38:27 +01:00
D. Berge
bccac446e5 Add polyfill for node:path so we can get basename() 2023-11-01 15:12:30 +01:00
D. Berge
535b3bcc12 Adapt to new Marked interface 2023-11-01 14:59:10 +01:00
D. Berge
11e84a7e72 Upgrade Node packages for frontend build 2023-10-31 22:41:37 +01:00
D. Berge
5ef55a9d8e Add dark mode support for QC graphs.
Closes #159.
2023-10-31 21:33:30 +01:00
D. Berge
f53e15df93 Merge branch '265-add-shotlog' into 'devel'
Resolve "Add shotlog"

Closes #265

See merge request wgp/dougal/software!54
2023-10-31 18:51:16 +00:00
D. Berge
cf887b7852 Upgrade Plotly 2023-10-31 19:49:40 +01:00
D. Berge
a917976a3a Fix styling of the download button in the event log 2023-10-31 19:48:42 +01:00
D. Berge
c201229891 Add a link from the event log to the shotlog 2023-10-31 19:47:18 +01:00
D. Berge
7ac997cd7d Add a link from the shotlog to the event log 2023-10-31 19:46:40 +01:00
D. Berge
08e6c4a2de Restyle sequence list links to shot and event logs 2023-10-31 19:46:07 +01:00
D. Berge
2c21f8f7ef Add some graphics to the shotlog 2023-10-31 19:15:43 +01:00
D. Berge
a76aefe418 Add graphing component for shotpoint timing visualisations.
It operates in one of these modes:

* facet="bars" (default): shows a barplot.

* facet="lines": shows a lineplot.

* facet="area": shows a lineplot where the area between the
  line(s) and y=0 is filled with a colour.
2023-10-31 19:11:09 +01:00
D. Berge
8d825fc53b Add graphing component for inline/crossline visualisations.
The component takes a list of shots and operates in one of these
modes:

* facet="scatter" (default): shows a scatterplot of every shot
  where x and y are the crossline and inline errors respectively.

* facet="crossline": shows a line graph depicting the crossline
  error along the line, x is the shotpoint and y is the crossline
  error.

* facet="2dhist": shows the crossline error as a 2D histogram.
  The z value is the density (number of samples in the bin) and
  x and y are the bin centres.

* facet="c-o": provided that the shot data comes from a final
  sequence, shows the difference between final and raw positions
  along the inline / crossline axes.
2023-10-31 19:04:19 +01:00
D. Berge
b039a5f1fd Expand unpack() to be more expressive 2023-10-31 19:03:10 +01:00
D. Berge
5c1218e95e Add link to shotlog from sequence list 2023-10-31 10:34:51 +01:00
D. Berge
1bb5e2a41d Implement the SequenceSummary component as a shotlog 2023-10-31 10:33:56 +01:00
D. Berge
1576b121e6 Force dev frontend to run on IPv4 2023-10-29 20:46:11 +01:00
D. Berge
a06cdde449 Fix mapGetters() in ProjectList 2023-10-29 20:38:58 +01:00
D. Berge
121131e910 Add control to filter out archived projects in ProjectList 2023-10-29 20:38:58 +01:00
D. Berge
9136e9655d Return the archived project configuration value.
This value indicates whether the project should receive updates
from external sources.
2023-10-29 20:38:58 +01:00
D. Berge
c646944886 Add download control for all events to Log view.
The log until now offered a download control only in sequence
view mode. With this change, download is available (albeit not
in all formats) for the entire log.

To download events for a selection of dates (constrained by day,
week or month) the user should use the Calendar view instead.
2023-10-29 20:38:58 +01:00
D. Berge
0e664fc095 Add download control to Calendar view.
Will download the event log for the currently selected calendar
period (day, week, month, …) in a choice of formats.
2023-10-29 20:38:58 +01:00
D. Berge
1498891004 Update API specification 2023-10-29 20:38:58 +01:00
D. Berge
89cb237f8d Use setContentDisposition() 2023-10-29 20:38:58 +01:00
D. Berge
3386c57670 Add setContentDisposition() utility function.
It checks if a request has a `filename` search parameter and if
so, sets the Content-Disposition response header to attachment
with the provided filename.
2023-10-29 20:38:58 +01:00
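
A sketch of the helper under Express, assuming the parameter arrives as
req.query.filename:

    function setContentDisposition(req, res) {
      const { filename } = req.query;
      if (filename) {
        res.set("Content-Disposition", `attachment; filename="${filename}"`);
      }
    }
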
D. Berge
7285de5ec4 Import projectConfiguration getter into Log view.
So that we can get hold of `events.presetRemarks`.
2023-10-29 20:38:58 +01:00
D. Berge
a95059f5e5 Change navigation bar aesthetics 2023-10-29 20:38:58 +01:00
D. Berge
1ac81c34ce Add structured values support to <dougal-event-edit/> 2023-10-29 20:38:58 +01:00
D. Berge
22387ba215 Add <dougal-event-select/> component.
This is a refactoring of <dougal-event-edit/> focusing on the
preset remark selection combo box and context menu with the
addition of support for structured values via the
<dougal-event-properties/> component.
2023-10-29 20:38:58 +01:00
D. Berge
b77d41e952 Add <dougal-event-properties/> component.
It provides an input form for structured values.
2023-10-29 20:38:58 +01:00
D. Berge
aeecb7db7d Replace hard-coded navigation bar with dynamic alternative.
Navigation bars should be coded as their own component and
added to the meta section of the Vue Router's route(s) in which
they are to be used.
2023-10-29 20:38:58 +01:00
D. Berge
ac9a683135 Add <v-app-bar/> extension info to router.
The idea is for the <dougal-navigation/> component to dynamically
load the extension, if any, defined in the route's meta attribute.
2023-10-29 20:38:58 +01:00
D. Berge
17a58f1396 Create new <v-app-bar/> extension component.
Intended to be used in the v-slot:extension slot of <v-app-bar/>.
2023-10-29 20:38:58 +01:00
D. Berge
b2a97a1987 Fix typo in SQL query.
Fixes #284.
2023-10-29 20:38:58 +01:00
D. Berge
f684e3e8d6 Track changes to projects list.
The main application component listens for project events and
updates the Vuex store when a project related event is seen.
2023-10-29 20:38:58 +01:00
D. Berge
219425245f Add Vuex projects module.
Not to be confused with the `project` module.

`projects`: lists all available projects
`project`: lists details for one project
2023-10-29 20:38:58 +01:00
D. Berge
31419e860e Update API specification 2023-10-28 11:21:43 +02:00
D. Berge
65481d3086 Return the group project configuration value.
This value (expected to be an array of strings) can be used to
link related projects.

Closes #186.
2023-10-28 11:15:26 +02:00
D. Berge
d64a1fcee7 Merge branch '280-consolidate-handling-of-project-data-in-the-project-component-rather-than-in-individual-tab' into 'devel'
Resolve "Consolidate handling of project data in the Project component rather than in individual tab components"

Closes #280

See merge request wgp/dougal/software!48
2023-10-25 14:28:06 +00:00
D. Berge
2365789d48 Merge branch '281-modify-planner-endpoint-s' into 'devel'
Resolve "Modify planner endpoint(s)"

Closes #281

See merge request wgp/dougal/software!49
2023-10-25 14:26:23 +00:00
D. Berge
4c2a2617a1 Adapt Project component to Vuex use for fetching data.
The Project component is now responsible for fetching and
updating the data used by most project tabs, with the
exception of ProjectSummary, QC, Graphs and Map. It is
also the only one listening for server events and reacting
to them.

Individual tabs are still responsible for sending data to
the server, at least for the time being.
2023-10-25 16:19:18 +02:00
D. Berge
5021888d03 Adapt Log component to Vuex use for fetching data 2023-10-25 16:18:41 +02:00
D. Berge
bf633f7fdf Refactor Calendar component.
- adapts it to Vuex use for fetching data
- displays extra events in 4-day and day views
- allows classifying by event label in 4-day and day views
2023-10-25 16:16:01 +02:00
D. Berge
847f49ad7c Adapt SequenceList component to Vuex use for fetching data 2023-10-25 16:15:17 +02:00
D. Berge
171feb9dd2 Adapt Plan component to Vuex use for fetching data 2023-10-25 16:14:45 +02:00
D. Berge
503a0de12f Adapt LineList component to Vuex use for fetching data 2023-10-25 16:13:56 +02:00
D. Berge
cf89a43f64 Add project configuration to Vuex store 2023-10-25 16:11:24 +02:00
D. Berge
680e376ed1 Add Vuex sequence module 2023-10-25 16:11:24 +02:00
D. Berge
a26974670a Add Vuex plan module 2023-10-25 16:11:24 +02:00
D. Berge
16a6cb59dc Add Vuex line module 2023-10-25 16:11:24 +02:00
D. Berge
829e206831 Add Vuex label module 2023-10-25 09:59:04 +02:00
D. Berge
83244fcd1a Add Vuex event module 2023-10-25 09:51:28 +02:00
D. Berge
d9a6c77d0c Update API description 2023-10-23 19:25:48 +02:00
D. Berge
b5aafe42ad Add YAML support to events GET endpoint 2023-10-23 19:24:03 +02:00
D. Berge
025f3f774d Add YAML and CSV support to project configuration GET endpoint 2023-10-23 19:22:50 +02:00
D. Berge
f26e746c2b Add flatEntries utility 2023-10-23 18:58:37 +02:00
D. Berge
39eaf17121 Update API description 2023-10-23 18:48:05 +02:00
D. Berge
1bb06938b1 Add CSV export handler to main event log endpoint.
Closes #245.
2023-10-23 17:28:30 +02:00
D. Berge
851369a0b4 Invalidate planner endpoint cache when setting remarks 2023-10-23 14:58:41 +02:00
D. Berge
5065d62443 Update planner endpoint documentation 2023-10-23 14:57:27 +02:00
D. Berge
2d1e1e9532 Modify return payload of planner endpoint.
Previous:

[
  { sequence: …},
  { sequence: …},
  …
]

Current:

{
  remarks: "…",
  sequences: [
    { sequence: …},
    { sequence: …},
    …
  ]
}
2023-10-23 14:53:32 +02:00
D. Berge
051049581a Merge branch '278-rewrite-events-queue' into 'devel'
Resolve "Rewrite events queue"

Closes #278

See merge request wgp/dougal/software!46
2023-10-17 10:28:21 +00:00
D. Berge
da5ae18b0b Merge branch '269-support-requesting-a-partial-update-from-the-events-log-endpoint' into devel 2023-10-17 12:27:31 +02:00
D. Berge
ac9353c101 Add database upgrade file 31. 2023-10-17 12:27:06 +02:00
D. Berge
c4c5c44bf1 Add comment 2023-10-17 12:20:19 +02:00
D. Berge
d3659ebf02 Merge branch '269-support-requesting-a-partial-update-from-the-events-log-endpoint' into 'devel'
Resolve "Support requesting a partial update from the events log endpoint"

Closes #269

See merge request wgp/dougal/software!47
2023-10-17 10:18:41 +00:00
D. Berge
6b5070e634 Add event changes API endpoint description 2023-10-17 12:15:41 +02:00
D. Berge
09ff96ceee Add events change API endpoint 2023-10-17 11:15:36 +02:00
D. Berge
f231acf109 Add events change middleware 2023-10-17 11:15:06 +02:00
D. Berge
e576e1662c Add library function returning event changes after given epoch 2023-10-17 11:13:58 +02:00
D. Berge
6a21ddd1cd Rewrite events listener and handlers.
The events listener now uses a proper self-consuming queue and
the event handlers have been rewritten accordingly.

The way this works is that running init() on the handlers
library instantiates the handlers and returns two higher-order
functions, prepare() and despatch(). A call to the latter of
these is appended to the queue with each new incoming event.

The handlers have access to a context object (ctx) which may be
used to persist data between calls and/or exchange data between
handlers. This is used notably to give the handlers access to
project configurations, which are themselves refreshed by a
project configuration change handler (DetectProjectConfigurationChange).
2023-10-14 20:53:42 +02:00
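
The rough shape of that scheme, as a sketch (the wants()/handle() methods
are assumptions, not from the commit):

    function init(handlerClasses) {
      const ctx = {}; // persists between calls; shared between handlers
      const handlers = handlerClasses.map((H) => new H());
      const prepare = (event) => handlers.filter((h) => h.wants(event));
      const despatch = async (event) => {
        for (const h of prepare(event)) await h.handle(event, ctx);
      };
      return { prepare, despatch };
    }

    // For each incoming event: queue.push(() => despatch(event));
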
D. Berge
c1e35b2459 Cache project configuration details.
This avoids requesting the project configurations on every single
incoming message. A listener refreshes the data on configuration
changes.
2023-10-14 20:11:18 +02:00
D. Berge
eee2a96029 Modify logging statements 2023-10-14 20:10:46 +02:00
D. Berge
6f5e5a4d20 Fix bug for shortcut when there is only one candidate project 2023-10-14 20:09:07 +02:00
D. Berge
9e73cb7e00 Clean up on SIGINT, SIGHUP signals 2023-10-14 20:07:19 +02:00
D. Berge
d7ab4eec7c Run some tasks periodically from the main process.
This reduces reliance on crontab jobs.
2023-10-14 20:06:38 +02:00
D. Berge
cdd96a4bc7 Don't bother trying to kill the child process on exit.
As the exit signal handler does not allow asynchronous tasks and
besides, killing the parent should kill its children too.
2023-10-14 20:02:54 +02:00
D. Berge
39a21766b6 Exit on start up errors 2023-10-14 20:02:04 +02:00
D. Berge
0e33c18b5c Replace console.log() with debug library calls 2023-10-14 19:57:57 +02:00
D. Berge
7f411ac7dd Add queue libraries.
A basic queue implementation and one that consumes its items
automatically until empty.
2023-10-14 19:56:56 +02:00
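
A minimal sketch of the self-consuming variant (implementation assumed):

    class SelfConsumingQueue {
      constructor() {
        this.items = [];
        this.running = false;
      }
      push(task) { // task: an async function
        this.items.push(task);
        this.drain();
      }
      async drain() {
        if (this.running) return; // already consuming
        this.running = true;
        while (this.items.length) await this.items.shift()();
        this.running = false;
      }
    }
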
D. Berge
ed1da11c9d Add helper function to purge notifications 2023-10-14 19:54:34 +02:00
D. Berge
66ec28dd83 Refactor DB notifications listener to support large payloads.
The listener will automatically retrieve the full payload
before passing it on to event handlers.
2023-10-14 18:33:41 +02:00
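
A sketch of the idea using the pg-listen API, assuming the notification
carries only a row id which is then read back in full (lookup and handler
names hypothetical):

    const createSubscriber = require("pg-listen");
    const subscriber = createSubscriber({ connectionString: process.env.DATABASE_URL });

    subscriber.notifications.on("events", async ({ id }) => {
      // NOTIFY payloads are capped (8000 bytes by default in Postgres), so
      // fetch the full record rather than trusting the notification body.
      const payload = await loadFullPayload(id); // hypothetical lookup
      await despatch(payload);                   // hypothetical handler
    });

    await subscriber.connect();
    await subscriber.listenTo("events");
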
D. Berge
b928d96774 Add database upgrade file 30. 2023-10-14 18:29:28 +02:00
D. Berge
73335f9c1e Merge branch '136-add-line-change-time-log-pseudoevent' into 'devel'
Resolve "Add line change time log pseudoevent"

Closes #136

See merge request wgp/dougal/software!45
2023-10-04 12:50:49 +00:00
D. Berge
7b6b81dbc5 Add more debugging statements 2023-10-04 14:50:12 +02:00
D. Berge
2e11c574c2 Throw rather than return.
Otherwise the finally {} block won't run.
2023-10-04 14:49:35 +02:00
D. Berge
d07565807c Do not retry immediately 2023-10-04 14:49:09 +02:00
D. Berge
6eccbf215a There should be no need to await.
That is because the queue handler will, in theory, only ever
process one event at a time.
2023-09-30 21:29:15 +02:00
D. Berge
8abc05f04e Remove dead code 2023-09-30 21:29:15 +02:00
D. Berge
8f587467f9 Add comment 2023-09-30 21:29:15 +02:00
D. Berge
3d7a91c7ff Rewrite ReportLineChangeTime 2023-09-30 21:29:15 +02:00
D. Berge
3fd408074c Support passing array in opts.sequences to event.list() 2023-09-30 21:29:15 +02:00
D. Berge
f71cbd8f51 Add unique utility function 2023-09-30 21:29:15 +02:00
D. Berge
915df8ac16 Add handler for creation of line change time events 2023-09-30 21:29:15 +02:00
D. Berge
d5ecb08a2d Allow switching to event entry by time.
A ‘Timed’ button is shown when a new (not edited) event is in
the event entry dialogue and the event has sequence and/or
point values. Pressing the button deletes the sequence/point
information and sets the date and time fields to current time.

Fixes #277.
2023-09-30 21:26:32 +02:00
D. Berge
9388cd4861 Make daily_tasks work with new project configuration 2023-09-30 20:36:46 +02:00
D. Berge
180590b411 Mark events as being automatically generated 2023-09-30 01:42:27 +02:00
D. Berge
4ec37539bf Add utils to work with Postgres ranges 2023-09-30 01:41:45 +02:00
D. Berge
8755fe01b6 Refactor events.list.
The SQL has been simplified and the following changes made:

- The `sequence` argument now can only take one individual
  sequence, not a list of sequences.
- A new `sequences` argument is recognised. It takes a list
  of sequences (as a string).
- A new `label` argument is recognised. It takes a label
  name and returns events containing that label.
- A new `jpq` argument is recognised. It takes a JSONPath
  string which is applied to `meta` with jsonb_path_exists(),
  returning any events for which the JSON path expression
  matches.
2023-09-30 01:37:22 +02:00
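
Hypothetical calls against the refactored signature, in an async context
(argument names from the commit message; values illustrative):

    await event.list({ sequence: 11 });               // one sequence only
    await event.list({ sequences: "11,12,15" });      // a list, as a string
    await event.list({ label: "Daily" });             // events carrying a label
    await event.list({ jpq: "$.sol ? (@ == true)" }); // JSONPath over `meta`
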
D. Berge
0bfe54e0c2 Include the meta attribute when posting events 2023-09-30 01:36:18 +02:00
D. Berge
29bc689b84 Merge branch '276-add-soft-start-event-detection' into 'devel'
Resolve "Add soft start event detection"

Closes #276

See merge request wgp/dougal/software!44
2023-09-29 15:02:57 +00:00
D. Berge
65682febc7 Add soft start and full volume events detection 2023-09-29 17:02:03 +02:00
D. Berge
d408665d62 Write meta info to automatic events 2023-09-29 16:49:27 +02:00
D. Berge
64fceb0a01 Merge branch '127-sol-eol-events-not-being-inserted-in-the-log-automatically' into 'devel'
Resolve "SOL / EOL events not being inserted in the log automatically"

Closes #127

See merge request wgp/dougal/software!43
2023-09-29 14:17:46 +00:00
D. Berge
ab58e578c9 Use DEBUG library throughout 2023-09-29 16:16:33 +02:00
D. Berge
0e58b8fa5b Refactor code to identify candidate schemas.
As part of the refactoring, we took into account a slight payload
format change (project configuration details are under the `data`
attribute).
2023-09-29 16:13:35 +02:00
D. Berge
99ac082f00 Use common naming convention both online and offline 2023-09-29 16:11:44 +02:00
D. Berge
4d3fddc051 Merge branch '274-use-new-db-event-notifier-for-event-processing-handlers' into 'devel'
Resolve "Use new DB event notifier for event processing handlers"

Closes #275, #230, and #274

See merge request wgp/dougal/software!42
2023-09-29 14:03:00 +00:00
D. Berge
42456439a9 Remove ad-hoc notifier 2023-09-29 15:59:12 +02:00
D. Berge
ee0c0e7308 Replace ad-hoc notifier with pg-listen based version 2023-09-29 15:59:12 +02:00
D. Berge
998c272bf8 Add var/* to .gitignore 2023-09-29 15:59:12 +02:00
D. Berge
daddd1f0e8 Add script to rewrite IP and MAC addresses in packet captures.
Closes #230.
2023-09-29 15:58:59 +02:00
D. Berge
17f20535cb Cope with fragmented UDP packets.
Fixes #275.

Use this as the systemd unit file to run as a service:

[Unit]
Description=Dougal Network Packet Capture
After=network.target remote-fs.target nss-lookup.target

[Service]
ExecStart=/srv/dougal/software/sbin/packet-capture.sh
ExecStop=/bin/kill -s QUIT $MAINPID
Restart=always
User=root
Group=users
Environment=PATH=/usr/bin:/usr/sbin:/usr/local/bin
Environment=INS_HOST=172.31.10.254
WorkingDirectory=/srv/dougal/software/var/
SyslogIdentifier=dougal.pcap

[Install]
WantedBy=multi-user.target
2023-09-29 15:28:11 +02:00
D. Berge
0829ea3ea1 Save a copy of the headers, not the original.
Otherwise ExpressJS will complain about trying to modify
headers that have already been sent.
2023-09-24 12:17:16 +02:00
D. Berge
2069d9c3d7 Remove dead code 2023-09-24 12:15:06 +02:00
D. Berge
8a2d526c50 Ignore schema attribute in PATCH payload.
Fixes #273.
2023-09-24 12:14:20 +02:00
D. Berge
8ad96d6f73 Ensure that requiredFields is always defined.
Otherwise, `Object.entries(requiredFields)` may fail.
2023-09-24 11:59:26 +02:00
D. Berge
947faf8c05 Provide default glob specification for map layer imports 2023-09-24 11:34:10 +02:00
D. Berge
a948556455 Fail gracefully if map layer data does not exist.
Fixes #272.
2023-09-24 11:33:32 +02:00
D. Berge
835384b730 Apply path conversion to QC definition files 2023-09-23 22:50:09 +02:00
D. Berge
c5b93794f4 Move path conversion to general utilities 2023-09-23 13:44:53 +02:00
D. Berge
056cd32f0e Merge branch '271-qc-results-not-being-refreshed' into 'devel'
Resolve "QC results not being refreshed"

Closes #271

See merge request wgp/dougal/software!41
2023-09-18 10:08:35 +00:00
D. Berge
49bb413110 Merge branch '270-real-time-interface-stopped-working' into 'devel'
Resolve "Real-time interface stopped working"

Closes #270

See merge request wgp/dougal/software!40
2023-09-18 10:08:27 +00:00
D. Berge
ceccc42050 Don't cache response ETags for QC endpoints 2023-09-18 12:06:38 +02:00
D. Berge
aa3379e1c6 Adapt RTI save function to refactored project configuration in DB 2023-09-18 11:58:55 +02:00
D. Berge
4063af0e25 Merge branch '268-inline-crossline-errors-no-longer-being-calculated' into 'devel'
Resolve "Inline/crossline errors no longer being calculated"

Closes #268

See merge request wgp/dougal/software!39
2023-09-15 18:03:51 +00:00
D. Berge
d53e6060a4 Update database templates to v0.4.2 2023-09-15 20:01:54 +02:00
D. Berge
85d8fc8cc0 Update required database version 2023-09-15 14:22:22 +02:00
D. Berge
0fe40b1839 Add missing require 2023-09-15 14:22:02 +02:00
D. Berge
21de4b757f Add database upgrade file 29. 2023-09-15 12:52:42 +02:00
D. Berge
96cdbb2cff Add database upgrade file 28. 2023-09-15 12:52:27 +02:00
D. Berge
d531643b58 Add database upgrade file 27. 2023-09-15 12:52:06 +02:00
D. Berge
a1779ef488 Do not cache /navdata endpoint responses 2023-09-14 13:20:16 +02:00
D. Berge
5239dece1e Do not cache GIS endpoint responses 2023-09-14 13:19:57 +02:00
D. Berge
a7d7837816 Allow only admins to patch project configurations 2023-09-14 13:19:16 +02:00
D. Berge
ebcfc7df47 Allow everyone to access project configuration.
This is necessary as it is requested by various parts of the
frontend.

Consider more granular access control.
2023-09-14 13:17:28 +02:00
D. Berge
dc4b9002fe Adapt QC endpoints to new configuration APIs 2023-09-14 13:15:59 +02:00
D. Berge
33618b6b82 Do not cache Set-Cookie headers 2023-09-14 13:13:47 +02:00
D. Berge
597d407acc Adapt QC view to new label payload from API 2023-09-14 13:13:18 +02:00
D. Berge
6162a5bdee Stop importing P1/90s until scripts are upgraded.
See #266.
2023-09-14 13:09:38 +02:00
D. Berge
696bbf7a17 Take etc/config.yaml out of revision control.
This file contains site-specific configuration. Instead, an
example config.yaml is now provided.
2023-09-14 13:07:33 +02:00
D. Berge
821fcf0922 Add wx forecast info to plan (experiment).
Use https://open-meteo.com/ as a weather forecast provider.

This code is intended for demonstration only, not for
production purposes.

(issue #157)


(cherry picked from commit cc4bce1356)
2023-09-13 20:04:15 +00:00
D. Berge
b1712d838f Merge branch '245-export-event-log-as-csv' into 'devel'
Resolve "Export event log as CSV"

Closes #245

See merge request wgp/dougal/software!38
2023-09-13 20:02:07 +00:00
D. Berge
895b865505 Expose CSV output option in user interface 2023-09-13 21:59:57 +02:00
D. Berge
5a2af5c49e Add CSV output option for events log 2023-09-13 21:58:06 +02:00
D. Berge
24658f4017 Allow patching project name if no name is already set 2023-09-13 16:13:43 +02:00
D. Berge
6707cda75e Ignore case when patching configuration ID 2023-09-13 16:13:12 +02:00
D. Berge
1302a31b3d Improve formatting of layer alert 2023-09-13 13:00:19 +02:00
D. Berge
871a1e8f3a Don't show alert if layer is empty (but log to console) 2023-09-13 12:59:47 +02:00
D. Berge
04e1144bab Simplify expression 2023-09-13 12:59:24 +02:00
D. Berge
6312d94f3e Add support for user layer tooltips and popups 2023-09-13 12:58:44 +02:00
D. Berge
ed91026319 Add tooltip and popup options to map layer configuration.
- `tooltip` takes the name of a GeoJSON property that will be
  shown in a tooltip when hovering the mouse over a feature.

- `popup` can take either the name of a property as above, or
  the boolean value `true`. In the latter case, a table of all
  the feature's properties will be shown when clicking on the
  feature. In the former case, only the value of the designated
  property will be shown.
2023-09-13 12:55:37 +02:00
D. Berge
441a4e296d Import map layers from the runner 2023-09-13 11:24:04 +02:00
D. Berge
c33c3f61df Alert the user if a map layer is too big 2023-09-13 11:22:49 +02:00
D. Berge
2cc293b724 Do not fail trying to restore state for non-existing layers 2023-09-13 11:22:05 +02:00
D. Berge
ee129b2faa Merge branch '114-allow-users-to-show-arbitrary-geojson-on-the-map' into 'devel'
Resolve "Allow users to show arbitrary GeoJSON on the map."

Closes #114

See merge request wgp/dougal/software!37
2023-09-12 17:34:51 +00:00
D. Berge
98d9b3b093 Adapt Map view to new label payload from API 2023-09-12 19:31:58 +02:00
D. Berge
57b9b420f8 Show an error if a layer is too large.
The map view limits the size of layers (both user and regular) in
order to keep the system responsive, as Leaflet is not great at
handling large layers.
2023-09-12 19:29:02 +02:00
D. Berge
9e73f2603a Implement user layers on map view.
The user layers are defined in the project configuration under
`imports.map.layers`.

Multiple layers may be defined and each layer may consist of one
or more GeoJSON files. Files are retrieved via the /files/ API
endpoint.
2023-09-12 19:29:02 +02:00
D. Berge
707889be42 Refactor layer API endpoint and database functions.
- A single get() function is used both to list all available
  layers, if no layer name is given, or a single layer.
- The database no longer holds the actual layer contents,
  only the path to the layer file(s), so the list() function
  is now redundant as we return the full payload in every case.
- The /gis/layer and /gis/layer/:name endpoints now have the same
  payload structure.
2023-09-12 19:29:02 +02:00
D. Berge
f9a70e0145 Refactor map layer importer.
- Now a layer may consist of a path pointing to a directory plus a
  glob, or a path pointing directly to a single file.
- If a file already exists in the database, check if the layer
  name has changed and if so, update it.
- Do not import the actual file contents, as the path is enough
  (it can be retrieved via the /file/:path API endpoint).
2023-09-12 11:05:10 +02:00
D. Berge
b71489cee1 Add get_file_data() function to datastore 2023-09-12 11:04:37 +02:00
D. Berge
0a9bde5f10 Add Background layer to map.
This is a limited implementation of layer backgrounds. The API
supports an arbitrary number of arbitrarily named background
layers, but for the time being we only recognise one background
layer named `Background` and of GeoJSON type.

Certain properties, such as colour/color, opacity, etc., are
recognised and applied as feature styles. If not, a default
style is used.
2023-09-11 10:17:10 +02:00
D. Berge
36d5862375 Add map layer middleware and API endpoints 2023-09-11 10:15:19 +02:00
D. Berge
398c702004 Add map layer functions to database interface 2023-09-11 10:12:46 +02:00
D. Berge
b2d1798338 Add map layer importer 2023-09-11 10:00:59 +02:00
D. Berge
4f165b0c83 Revert behaviour of new jwt-express version.
Fixes breakage introduced in commit
cd00f8b995.
2023-09-10 14:09:01 +02:00
D. Berge
2c86944a51 Merge branch '262-preset-remarks-and-labels-no-longer-working-with-api-0-4-0' into 'devel'
Resolve "Preset remarks and labels no longer working with API 0.4.0"

Closes #262

See merge request wgp/dougal/software!36
2023-09-10 10:10:22 +00:00
D. Berge
5fc51de7d8 Adapt Log view to new configuration endpoint in the API 2023-09-10 12:01:59 +02:00
D. Berge
158e0fb788 Adapt Log view to new label payload from API 2023-09-10 12:01:30 +02:00
D. Berge
941d15c1bc Return labels directly from project configuration.
NOTE: This is a breaking API change. Before this it returned an
array of labels, now it returns an object.
2023-09-10 11:59:38 +02:00
D. Berge
cd00f8b995 Breaking-change Node package updates (server) 2023-09-10 11:49:56 +02:00
D. Berge
44515f8e78 Non-breaking Node package updates (server) 2023-09-09 20:54:04 +02:00
D. Berge
54fbc76da5 Merge branch '261-wrong-missing-shots-value-in-sequence-summary' into 'devel'
Resolve "Wrong missing shots value in sequence summary"

Closes #261

See merge request wgp/dougal/software!35
2023-09-09 18:46:33 +00:00
D. Berge
c1b5196134 Update database templates to v0.3.12.
Incorporates fix for bug #261.
2023-09-09 20:45:11 +02:00
D. Berge
fb3d3be546 Trailing slash in API call results in "unauthorised" error.
No idea why.
2023-09-09 20:39:49 +02:00
D. Berge
8e11e242ed Remove NODE_OPTIONS from scripts.
Node version 18 does not seem to like it.
2023-09-09 20:37:08 +02:00
D. Berge
8a815ce3ef Add database upgrade file 26. 2023-09-09 20:23:20 +02:00
D. Berge
91076a50ad Show API error messages if available 2023-09-09 17:00:32 +02:00
D. Berge
e624dcdde0 Support async API callbacks in Vuex action 2023-09-09 16:59:43 +02:00
D. Berge
a25676122c Update material design icons dependency 2023-09-09 16:58:44 +02:00
D. Berge
e4dfbe2c9a Update minimum node version to 18 2023-09-09 16:57:20 +02:00
D. Berge
78fb34d049 Update the API version number 2023-09-09 16:56:52 +02:00
D. Berge
38c4125f4f Support patching values out of the configuration.
A configuration patch having keys with null values will result
in those keys being removed from the configuration.
2023-09-09 16:53:42 +02:00
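
A sketch of the null-removes-key semantics (merge helper assumed):

    function patchMerge(target, patch) {
      for (const [key, value] of Object.entries(patch)) {
        if (value === null) delete target[key]; // null removes the key
        else if (typeof value === "object" && !Array.isArray(value))
          target[key] = patchMerge(target[key] ?? {}, value);
        else target[key] = value;
      }
      return target;
    }

    patchMerge({ a: 1, b: { c: 2, d: 3 } }, { b: { c: null } });
    // => { a: 1, b: { d: 3 } }
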
D. Berge
04d6cbafe3 Use refactored database API in QC executable 2023-09-09 16:42:30 +02:00
D. Berge
e6319172d8 Fix typo in QC executable 2023-09-09 16:42:00 +02:00
D. Berge
5230ff63e3 Use new database API calls for configuration 2023-09-09 16:39:53 +02:00
D. Berge
2b364bbff7 Make bin script compatible with Python 3.6 2023-09-09 16:38:51 +02:00
D. Berge
c4b330b2bb Don't cache ETags for /files/ endpoint.
As we have no practical way of invalidating those.
2023-09-02 16:06:31 +02:00
D. Berge
308eda6342 Use ETag middleware 2023-09-02 15:29:39 +02:00
D. Berge
e8b1cb27f1 Add ETag middleware 2023-09-02 15:29:24 +02:00
D. Berge
ed14fd0ced Add notifier to DB library 2023-09-02 15:28:17 +02:00
D. Berge
fb10e56487 Add pg-listen dependency 2023-09-02 15:26:53 +02:00
D. Berge
56ed0cbc79 Merge branch '246-add-endpoint-for-creating-a-new-survey' into 'devel'
Resolve "Add endpoint for creating a new survey"

Closes #179, #174, and #246

See merge request wgp/dougal/software!29
2023-09-02 13:10:56 +00:00
D. Berge
227e588782 Merge branch '248-dougal-event-log-takes-a-long-time-to-register-new-events' into 'devel'
Resolve "Dougal event log takes a long time to register new events"

Closes #248

See merge request wgp/dougal/software!30
2023-09-02 13:09:56 +00:00
D. Berge
53f2108e37 Adapt import functions to use logical paths 2023-08-30 14:56:09 +02:00
D. Berge
ccf4bbf547 Use logical paths rather than physical 2023-08-30 14:54:27 +02:00
D. Berge
c99a625b60 Add function to retrieve survey configurations from DB.
As the survey definitions will no longer be stored in files
under etc/surveys/ but directly in the database, this
function replaces configuration.surveys().
2023-08-30 14:27:15 +02:00
D. Berge
25ab623328 Add functions for translating paths.
The Dougal database will no longer store physical file paths
but rather logical ones, relative to (config.yaml).imports.paths.

These functions translate between physical and logical paths.
2023-08-30 14:17:47 +02:00
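
A sketch of the translation, assuming imports.paths maps logical prefixes
to physical roots:

    const roots = { data: "/mnt/survey-data" }; // from (config.yaml).imports.paths

    function toPhysical(logical) {
      const [prefix, ...rest] = logical.split("/").filter(Boolean);
      if (!(prefix in roots)) throw new Error(`Unknown prefix: ${prefix}`);
      return [roots[prefix], ...rest].join("/");
    }

    function toLogical(physical) {
      const hit = Object.entries(roots).find(([, root]) => physical.startsWith(root + "/"));
      if (!hit) throw new Error("Path outside permitted roots");
      return "/" + hit[0] + physical.slice(hit[1].length);
    }

    toPhysical("/data/p190/seq0011.p190"); // => "/mnt/survey-data/p190/seq0011.p190"
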
D. Berge
455888bdac Fix method signature 2023-08-30 14:16:08 +02:00
D. Berge
b650ece0ee Add imports.paths key to config.yaml.
Used to tell Dougal which parts of the filesystem may be
accessed by users via the API (more specifically, via the
`/files/` API endpoints).
2023-08-30 14:12:07 +02:00
D. Berge
2cb96c0252 Let user download P1s from the Sequences tab 2023-08-30 14:08:28 +02:00
D. Berge
70cf59bb4c Add API files endpoint.
Used to download files. It relies on `imports.paths` being set
appropriately in `etc/config.yaml` to indicate which parts of
the filesystem are accessible to users via Dougal.
2023-08-30 13:51:31 +02:00
D. Berge
ec03627119 Remove logging statements 2023-08-30 13:48:26 +02:00
D. Berge
675c19f060 Fix whitespace 2023-08-30 13:47:51 +02:00
D. Berge
6721b1b96b Add API endpoint for patching a project 2023-08-30 13:47:02 +02:00
D. Berge
b4f23822c4 Fix db.configuration.get() 2023-08-30 13:43:36 +02:00
D. Berge
3dd1aaeddb Fix indentation 2023-08-30 13:42:25 +02:00
D. Berge
1e593e6d75 Clean up if project creation fails 2023-08-30 13:41:28 +02:00
D. Berge
ddbcb90c1f Add deepMerge() utility function 2023-08-30 13:37:01 +02:00
D. Berge
229fdf20ef Reload the project list on insert or deletion 2023-08-23 19:35:12 +02:00
D. Berge
72e67d0e5d React to project deletion 2023-08-23 19:34:47 +02:00
D. Berge
b26fefbc37 Show user-friendly message if a project cannot be found 2023-08-23 19:33:50 +02:00
D. Berge
04e0482f60 Vuex: add getters for project info 2023-08-23 19:31:22 +02:00
D. Berge
62f90846a8 Vuex: clear project variables if project not found 2023-08-23 19:30:52 +02:00
D. Berge
1f9c0e56fe Default npm run serve to 0.0.0.0 2023-08-23 19:29:13 +02:00
D. Berge
fe9d3563a0 Add API endpoint to delete a project 2023-08-23 19:26:27 +02:00
D. Berge
38a07dffc6 Add API endpoint to retrieve project configuration.
Only available to users with at least `write` access.
2023-08-23 19:26:27 +02:00
D. Berge
1a6500308f Add API endpoint for creating a project 2023-08-23 19:26:27 +02:00
D. Berge
6033b45ed3 Refactor API middleware.
The middleware names are kept consistent with the HTTP verbs
they handle.
2023-08-23 19:17:20 +02:00
D. Berge
33edef6647 Use modified body-parser accepting YAML 2023-08-23 19:12:44 +02:00
D. Berge
8f8e8b7492 Implement db.project.delete().
Removes a project from the database, but only if the project is
empty, i.e., it has no preplots, no lines and no events in its
log (except deleted).
2023-08-21 14:50:20 +02:00
D. Berge
ab5e3198aa Add DB function to return project configuration.
NOTE: mostly redundant with db.configuration.get(),
see previous commit.
2023-08-21 14:49:22 +02:00
D. Berge
60ed850d2d Change db.configuration.get() to use database.
NOTE: this endpoint is redundant with db.project.configuration.get()
except for its ability to return a partial tree.

TODO: merge this with db.project.configuration.get().
2023-08-21 14:46:51 +02:00
D. Berge
63b9cc5b16 Add database functions for project creation.
Instead of storing the project configuration in a YAML file
under `etc/surveys/`, this is now stored in public.projects.meta.

NOTE: as of this commit, the runner scripts (`bin/*.py`) are not
aware of this change and they will keep looking for project info
under `etc/surveys`. This means that projects created directly
in the database will be invisible to Dougal until the runner
scripts are changed accordingly.
2023-08-21 14:39:45 +02:00
D. Berge
f2edd2bec5 Refactor project DB functions.
The old db.project.list() function is now db.project.get()
and the old db.project.get() is now db.project.summary.get().

If a project does not exist, db.project.summary.get() now
throws a 404 rather than a database error.
2023-08-21 14:36:02 +02:00
D. Berge
44ad59130f Add pid2schema function.
Translates a project ID into a database schema name.
2023-08-21 14:31:23 +02:00
D. Berge
ecbb1e04ee Do not disable event edit form while loading data 2023-08-20 20:07:29 +02:00
D. Berge
7cb2c3ef49 Add comment 2023-05-30 17:20:35 +02:00
D. Berge
ff4f6bfd78 Ensure that we're connected to the Dougal database 2023-05-30 17:19:23 +02:00
D. Berge
fbe0cb5efa Default the API prefix to /api 2023-05-18 18:34:10 +02:00
D. Berge
aa7cbed611 Do not require authentication to query API version 2023-05-18 18:32:26 +02:00
D. Berge
89061f6411 Print port and prefix on startup 2023-05-18 18:30:48 +02:00
D. Berge
838883d8a3 Update caniuse version (package-lock) 2023-05-18 18:29:44 +02:00
D. Berge
cd196f1acd Add option needed for node v16+ support.
Note: this may cause the client *not* to start on node versions
less than 16.
2023-05-18 18:28:36 +02:00
D. Berge
a2b894fceb Fix class instantiation error.
Closes #252.
2023-05-12 15:32:12 +02:00
D. Berge
c3b3a4c70f Remove lock file if inhibiting tasks 2023-04-11 20:50:59 +02:00
D. Berge
8118641231 Do not run tasks if required mounts are not present.
A configuration item `imports.mounts` is added to
`etc/config.yaml`. This should be a list of paths
which must be non-empty. If any of the paths in that
list is empty, runner.sh will abort.

Closes #200.
2023-04-10 15:04:12 +02:00
D. Berge
6d8a199a3c Allow setting IP to listen on.
Running on bare metal, 127.0.0.1 is a sensible choice of address
to bind on, but that is not the case when running inside a
container, so we add the ability to choose which IP to listen on.

This can be given via the environment variable HTTP_HOST when
starting the server or, if used as a module, as the second
argument of the start(port, host, path) function.
2023-04-07 09:04:51 +02:00
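
For illustration (defaults assumed; start() per the signature above):

    const { start } = require("./server"); // hypothetical module path
    const port = process.env.HTTP_PORT || 3000;
    const host = process.env.HTTP_HOST || "127.0.0.1"; // 0.0.0.0 inside a container

    start(port, host, "/api");
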
D. Berge
5a44e20a5b Do not error out of npm install if postinstall fails.
The postinstall script will (rightly) return non-zero if the API
docs cannot be built, but this creates a problem when building a
container (Docker) image. In that case, we expect the postinstall
to fail, as the required files (spec/*) will not have been copied
into the image when `npm install` is run.

By adding an explicit OR clause we allow postinstall to end
gracefully whether or not the API docs have been built.
2023-04-05 12:32:35 +02:00
D. Berge
374739c133 Request ancillary library via HTTPS rather than SSH.
Otherwise newer versions of npm will choke during `npm install` due
to this npm bug: https://github.com/npm/cli/issues/2610
2023-04-04 18:16:50 +02:00
D. Berge
992205da4a Add event handler for midnight shot detection.
This event handler checks if there is a UTC date jump between
consecutive shots. If a jump is detected, it sends two new entries
to the event log, for the last shot and first shot of the previous
and current dates, respectively.

Fixes #223.
2022-05-15 14:06:18 +02:00
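
The date-jump check itself might look like this (shot shape assumed):

    function crossesMidnightUTC(prevShot, nextShot) {
      const day = (t) => new Date(t).toISOString().slice(0, 10); // YYYY-MM-DD, UTC
      return day(prevShot.tstamp) !== day(nextShot.tstamp);
    }
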
D. Berge
f5e08c68af Replace console output by debug functions 2022-05-15 13:38:47 +02:00
D. Berge
105fee0623 Update database schema template.
* midnight_shots uses final_shots rather than raw_shots
* log_midnight_shots removes stale midnight events
2022-05-15 13:28:15 +02:00
D. Berge
aff974c03f Modify log_midnight_shots() to remove non-relevant midnight shots.
Those shots could for instance have been removed due to a line edit.
2022-05-15 13:20:01 +02:00
D. Berge
bada6dc2e2 Modify DB upgrade file 25 to use final_shots 2022-05-15 13:19:01 +02:00
D. Berge
d5aac5e84d Add network packet capture script.
The idea is to capture incoming real-time data to be able to
replay it later on development systems, e.g., for new development
or troubleshooting.

Issue #230.
2022-05-14 11:57:09 +02:00
D. Berge
3577a2ba4a Change sass version specification in package-lock.
Should stop `npm install` from modifying it.
2022-05-13 19:19:45 +02:00
D. Berge
04df9f41cc Add script for daily database housekeeping.
The script bin/daily_tasks.py is intended to be run shortly after
midnight every day (e.g., via cron).

At the moment it inserts any missing LDSP / FDSP events. It can
be extended with other tasks as needed either by expanding
Datastore.run_daily_tasks() or by adding to bin/daily_tasks.py.

Fixes #223.
2022-05-13 19:04:39 +02:00
D. Berge
fdb5e0cbab Update database templates to v0.3.12.
* Add midnight_shots view
* Add log_midnight_shots() procedure
2022-05-13 18:55:43 +02:00
D. Berge
4b832babfd Add database upgrade file 25.
This defines a midnight_shots view and a log_midnight_shots() procedure
(with some overloads). The view returns all points straddling midnight
UTC and belonging to the same sequence (so last shot of the day and
first shot of the next day).

The procedure inserts the corresponding events (optionally constrained
by an earliest and a latest date) in the event log, unless the events
already exist.

Related to #223.
2022-05-13 18:53:32 +02:00
D. Berge
cc3a9b4e5c Fix comment 2022-05-13 18:52:40 +02:00
D. Berge
da5a708760 Add controls to hide accepted / all QC events.
Closes #218, #219.
2022-05-13 18:17:02 +02:00
D. Berge
9834e85eb9 Add placeholders hint, for discoverability 2022-05-12 23:38:39 +02:00
D. Berge
e19601218a Cope with schema not being detected 2022-05-12 23:04:07 +02:00
D. Berge
15c56d3f64 Use new debug() functions 2022-05-12 23:03:31 +02:00
D. Berge
632dd1ee75 Add placeholder replacement to log housekeeping tasks 2022-05-12 22:57:23 +02:00
D. Berge
aeff5a491d Update required database schema 2022-05-12 22:55:08 +02:00
D. Berge
9179c9332d Revert "Show sequence comments in log page"
This reverts commit a5db9c984b.

Fixes #210.
2022-05-12 22:46:11 +02:00
D. Berge
bb5de9a00e Update insert_event.py.
This script now works with the new event log.

Fixes #234. Midnight positions can be added via a cronjob such
as:

$DOUGAL_ROOT/bin/insert_event.py -t "$(date -I) 00:00:00Z" \
    -l Daily -l Prod \
    "Midnight position: @DMS@ (@POS@)"
2022-05-12 22:21:38 +02:00
D. Berge
d6b985fcd2 Replace event remarks placeholders in API data.
Events being created or edited via the API now call
replace_placeholders() making it possible to use
shortcuts to enter some event-related information.

See #229 for details.
2022-05-12 22:10:33 +02:00
D. Berge
3ed8339aa3 Migrate more console messages to debug() 2022-05-12 22:09:08 +02:00
D. Berge
1b925502bc Update database templates to v0.3.11.
* Redefine augment_event_data()
2022-05-12 21:59:38 +02:00
D. Berge
7cea79a9be Add database upgrade file 24.
This redefines augment_event_data() to use interpolation rather than
nearest neighbour. It now takes an argument indicating the maximum
allowed interpolation timespan. An overload with a default of ten
minutes is also provided, as an in situ replacement for the previous
version.

The ten minute default is based on Triggerfish headers behaviour seen
on crew 248 during soft starts.
2022-05-12 21:58:51 +02:00
D. Berge
69f565f357 Update database templates to v0.3.10.
* Add interpolate_geometry_from_tstamp()
2022-05-12 21:52:31 +02:00
D. Berge
23de4d00d7 Add database upgrade file 23.
This defines an interpolate_geometry_from_tstamp() function, taking a timestamp
and a maximum timespan in seconds. It will then interpolate a position
at the exact timestamp based on data from real_time_inputs, provided
that the effective interpolation timespan does not exceed the maximum
requested.

Fixes #243.
2022-05-12 21:51:00 +02:00
D. Berge
1992efe914 Update database templates to v0.3.9.
* Add replace_placeholders()
* Add scan_placeholders() procedure
2022-05-12 21:47:38 +02:00
D. Berge
c7f3f565cd Add database upgrade file 22.
This defines a replace_placeholders() function, taking as arguments
a text string and either a timestamp or a sequence / point pair. It
uses the latter arguments to find metadata from which it can extract
relevant information and replace it into the text string wherever the
appropriate placeholders appear. For instance, given a call such as
replace_placeholders('The position is @POS@', NULL, 11, 2600) it will
replace '@POS@' with the position of point 2600 in sequence 11, if it
exists (or leave the placeholder untouched otherwise).

A scan_placeholders() procedure is also defined, which calls the above
function on the entire event log.

Fixes #229.
2022-05-12 21:45:56 +02:00
D. Berge
1da02738b0 Update database templates to v0.3.8.
* Add event_position()
* Add event_meta()
2022-05-12 21:40:23 +02:00
D. Berge
732d8e9be6 Add database upgrade file 21.
This adds event_position() and event_meta() functions which are used
to retrieve position or metadata, respectively, given either a timestamp
or a sequence / point pair. Intended to be used in the context of #229.
2022-05-12 21:38:28 +02:00
D. Berge
a2bd614b17 Update database templates.
* Optimise public.geometry_from_tstamp()
* Remove index on public.real_time_inputs.meta->>'tstamp'
* Fix adjust_planner()
2022-05-10 21:57:53 +02:00
D. Berge
003c833293 Add database upgrade file 20.
This updates the adjust_planner() procedure to take into account the
new events schema (the `event` view has been replaced by `event_log`).

Fixes #208.
2022-05-10 21:54:46 +02:00
D. Berge
a4c458dc16 Add database upgrade file 19.
Rewrites geometry_from_tstamp() to make it more efficient.

Fixes #241.
2022-05-10 21:52:24 +02:00
D. Berge
f7b6ca3f79 Log runner output to syslog (if so configured).
The variable DOUGAL_LOG_FACILITY must be defined in the environment
(e.g., in ~/.dougalrc) for syslog to be enabled.
2022-05-08 15:30:05 +02:00
D. Berge
a7cce69c81 Add logging statements 2022-05-08 15:26:15 +02:00
D. Berge
2b20a5d69f Update line details on reimport conflict.
To deal with misnamed lines.

Fixes #240.
2022-05-08 15:25:11 +02:00
D. Berge
4fc5d1deda Add links to first / last page.
Fixes #237.
2022-05-07 14:58:16 +02:00
D. Berge
df13343063 Colour map QC events according to their labels.
We take the first label associated with the event (if any) and use
the label's colour for the event marker. We override the colour for
QC events and use a default value for events with no labels or if
the label does not have an associated colour.
2022-05-07 12:07:03 +02:00
D. Berge
a5603cf243 Fix detection of map QC events.
Fixes #236.
2022-05-07 12:05:56 +02:00
D. Berge
b6d4236325 Make prime data stand out.
Fixes #228.
2022-05-06 18:07:09 +02:00
D. Berge
7e8f00d9f2 Explicitly label comment sections in default template 2022-05-06 17:15:09 +02:00
D. Berge
721cfb36d1 Use timestamp from message payload if it has one.
Fixes #221.
2022-05-06 15:17:10 +02:00
D. Berge
222c951e49 Add debugging to navdata/save.
To help track down #221.
2022-05-06 14:31:06 +02:00
D. Berge
45d2e56ed1 Add debug() module.
It uses https://github.com/debug-js/debug but it is meant to be
called like this:

const debug = require("DOUGAL_ROOT/debug")(__filename);

That way the calling module's path is used as the debug namespace.
2022-05-06 14:11:31 +02:00
D. Berge
c5b6c87278 Add DOUGAL_ROOT symlink to node_modules.
This can be used as a shortcut when requiring a module from deep
within the file hierarchy, e.g., instead of:

require("../../../../lib/db");

one can do:

require("DOUGAL_ROOT/lib/db");
2022-05-06 14:08:19 +02:00
D. Berge
fd37e8b8d6 Add context option to accept/unaccept QCs.
Closes #220.
2022-05-04 19:45:20 +02:00
D. Berge
ce0310d0b0 Silence error on non-existent label definition 2022-05-04 19:42:53 +02:00
D. Berge
546bc45861 Remove dead code 2022-05-04 18:35:20 +02:00
D. Berge
602f2c0a34 Merge branch '215-flag-unflag-qc-results-as-accepted' into 'devel'
Resolve "Flag / unflag QC results as accepted"

Closes #215

See merge request wgp/dougal/software!28
2022-05-04 16:32:48 +00:00
D. Berge
37de5ab223 Implement UI for flagging QCs as accepted or unaccepted 2022-05-04 18:21:42 +02:00
D. Berge
d69c6c4150 Add DougalQcAcceptance Vue.js component.
Widget for use in the QC view to show controls for accepting or
unaccepting QCs.
2022-05-04 18:20:28 +02:00
D. Berge
d80f44547b Update API description 2022-05-04 18:13:14 +02:00
D. Berge
6c8515a879 Add QC results accept/unaccept API endpoints 2022-05-04 18:11:05 +02:00
D. Berge
bb9340a0af Add QC results accept/unaccept middleware.
This middleware can only deal with shot QCs, not sequence-wide QCs.
2022-05-04 17:22:18 +02:00
D. Berge
672c14fb67 Add functions to accept/unaccept QCs.
These are only able to deal with shot QCs. At this point, sequence-wide
QCs cannot be marked as accepted.
2022-05-04 17:19:20 +02:00
D. Berge
f4ee798bf0 Implement endpoint for QC deletion.
Closes #217.
2022-05-04 17:15:28 +02:00
D. Berge
c8ef089b28 Log speed value on Hydronav error.
Related to #206.
2022-05-03 23:58:42 +02:00
D. Berge
1f6d560d7e Style log events according to online/offline status.
Strictly speaking, it doesn't consider (or know) what the shooting
status is (but see #214). All it does is colour events differently
if they have all three of: sequence, point and timestamp.

This is probably good enough for the time being to close #134.
2022-05-03 23:42:58 +02:00
D. Berge
f37e07796c Change description of QC test.
It's not an error but only a warning.
2022-05-03 17:27:34 +02:00
D. Berge
349c052db0 Use all sequences to build QC tree.
Fixes #213.
2022-05-03 17:23:50 +02:00
D. Berge
1c291db6c6 Add database upgrade file 18.
* Adds label_in_sequence() function

NOTE: This function is already defined in schema-template.sql but
seemingly never got pushed into production.

Fixes #211.
2022-05-02 13:40:33 +02:00
D. Berge
f46fd4b6bc Cope with non-existing configuration paths.
Fixes #212.
2022-05-02 13:15:41 +02:00
D. Berge
10883eb1a6 Check for invalid speed values in Hydronav header.
Related to #206. If this is indeed what is causing the alerts,
we will change the logic so that it simply logs (or ignores)
invalid speeds rather than throwing.
2022-05-02 13:09:43 +02:00
D. Berge
af6e419aab Run QCs from runner.
When importing an old project, the first QC run could take a while
and cause a bit of backlog, but during normal shooting it is expected
that it will finish quite quickly (and this is monitored anyway).
2022-05-01 21:26:10 +02:00
D. Berge
6516896bae Disable system imports in runner.
They're not really used. Will probably remove at a later date.
2022-05-01 21:24:56 +02:00
D. Berge
c495dce27d Don't show event history widget for guests.
NOTE: guests still do have access to the relevant API endpoint.
In theory, a persistent and computer literate guest user could
visit the API endpoint directly and retrieve the edit history.
As the edit history may need to be given to users who otherwise
do not have write access, it is considered quite acceptable to
allow guest users to access the endpoint.

Closes #194.
2022-05-01 21:20:52 +02:00
D. Berge
40d96230d2 Adjust planner times from runner.
Fixes #167.
2022-05-01 20:27:19 +02:00
D. Berge
d607b4618a Merge branch '182-periodically-scan-the-events-table-for-missing-information' into 'devel'
Resolve "Periodically scan the events table for missing information"

Closes #182

See merge request wgp/dougal/software!26
2022-05-01 18:24:35 +00:00
D. Berge
fd41d2a6fa Launch database housekeeping tasks from runner 2022-05-01 20:10:27 +02:00
D. Berge
39690c991b Update database templates.
* Add index on public.real_time_inputs.meta->>'tstamp'
* Add public.geometry_from_tstamp()
* Add augment_event_data()
2022-05-01 19:47:16 +02:00
D. Berge
09ead4878f Add database upgrade file 17 2022-05-01 19:46:04 +02:00
D. Berge
588d210f24 Fix reporting for “gun pressures” QC test.
Fixes #205.
2022-04-30 17:37:38 +02:00
D. Berge
28be86e7ff Graphs view: delay “no sequences” message until loaded.
Related to #196.
2022-04-30 16:14:32 +02:00
D. Berge
1eac97cbd0 Change “No fire” QC definition 2022-04-30 16:13:12 +02:00
D. Berge
e3a3bdb153 Clean up whitespace.
Commands used:

find . -type f -name '*.js'| while read FILE; do if echo $FILE |grep -qv node_modules; then sed -ri 's/^\s+$//' "$FILE"; fi; done
find . -type f -name '*.vue'| while read FILE; do if echo $FILE |grep -qv node_modules; then sed -ri 's/^\s+$//' "$FILE"; fi; done
find . -type f -name '*.py'| while read FILE; do if echo $FILE |grep -qv node_modules; then sed -ri 's/^\s+$//' "$FILE"; fi; done
2022-04-29 14:48:21 +02:00
D. Berge
0e534b583c Do not assume that lines have remarks.
Fixes #202.
2022-04-29 14:32:46 +02:00
D. Berge
51480e52ef Recognise "dark", "light" label view attributes.
In a label definition (in etc/surveys/*.yaml) we can now have
"dark" or "light" attributes under "view" to force the label
text to always use either the dark or light theme. This is
useful when a label's colour causes a bad contrast in either
theme.

Example:

  labels:
      Daily:
          view:
              colour: "#EFEBE9"
              description: "Of interest in the daily report"
              light: true # Text always displayed in a dark colour
          model:
              user: true
              multiple: true
2022-04-29 12:18:09 +02:00
D. Berge
187807cfb1 Enable Save button as soon as the remarks are changed.
Closes #199.
2022-04-27 19:45:26 +02:00
D. Berge
d386b97e42 Database upgrade 16: fix event edits.
Fixes #198.
2022-04-27 17:41:53 +02:00
D. Berge
da578d2e50 Fix project_summary view returning unwanted rows.
Fixes #197.
2022-04-27 10:49:46 +02:00
D. Berge
7cf89d48dd Fix whitespace 2022-04-26 17:41:48 +02:00
D. Berge
c0ec8298fa Don't try to show QC graphs on a new project.
If there are no sequences, just show a message to that effect.

Fixes #196.
2022-04-26 17:39:59 +02:00
D. Berge
68322ef562 Fix misleading comment.
Use an EPSG code that is actually in the work area of the Dougal boats.
2022-04-26 17:36:48 +02:00
D. Berge
888228c9a2 Do not crash if a project doesn't have QCs defined.
Fixes #195.
2022-04-26 14:50:34 +02:00
D. Berge
74d6f0b9a0 Accept mime query parameter 2022-04-16 17:18:04 +02:00
D. Berge
cf475ce2df Adapt middleware to new database schema.
As introduced by commit 0c6567d8f8.
2022-04-16 17:18:04 +02:00
D. Berge
26033b2a37 Fix syntax error.
Introduced by commit ead938b40f.
2022-04-13 09:04:52 +02:00
D. Berge
fafd4928d9 Fix Marked call (adapt to new Marked version) 2022-04-13 08:18:21 +02:00
D. Berge
ec38fdb290 Pin package sass version to avoid annoying warning 2022-03-18 20:07:50 +01:00
D. Berge
086172c5e7 Upgrade dependencies.
This is a conservative upgrade.

The upgraded version of leaflet-arrowheads uses optional chaining which
seems to cause webpack to choke, so added to "transpileDependencies" in
vue.config.js.

Closes #189.
2022-03-18 16:29:50 +01:00
D. Berge
3db453a271 Add keys to v-for loops 2022-03-18 16:15:06 +01:00
D. Berge
a5db9c984b Show sequence comments in log page 2022-03-18 15:05:08 +01:00
D. Berge
ead938b40f Inhibit exports.
They don't seem to be used, and for backups it's better to
just back up the whole database instead, which is being done
remotely.
2022-03-18 13:32:43 +01:00
D. Berge
634a7be3f1 Merge branch '184-refactor-qcs' into devel 2022-03-17 20:12:15 +01:00
D. Berge
913606e7f1 Allow forcing QCs.
QCs may be re-run for specific sequences or for a whole
project by defining an environment variable, as follows:

For an entire project:

* DOUGAL_FORCE_QC="project-id"

For specific sequences:

* DOUGAL_FORCE_QC="project-id sequence1 sequence2 … sequenceN"
2022-03-17 20:10:26 +01:00
D. Berge
49b7747ded Remove *all* QC events when saving sequence results.
When saving shot-by-shot results for a sequence,
*all* existing QC events for that sequence will be
removed first.

We do this because otherwise we may end up with QC
data for shots that no longer exist. Also, in the
case that we have QCed based on raw data, QC results
for shots which are not in the final data would stay
around even though those shots are no longer valid.
2022-03-17 20:07:11 +01:00
D. Berge
1fd265cc74 Update dependencies 2022-03-17 20:05:07 +01:00
D. Berge
13389706a9 Merge branch '184-refactor-qcs' into devel 2022-03-17 18:43:38 +01:00
D. Berge
818cd8b070 Add pg-cursor dependency, needed by QCs 2022-03-17 18:43:12 +01:00
D. Berge
a3d3c7aea7 Merge branch '184-refactor-qcs' into devel 2022-03-17 18:37:14 +01:00
D. Berge
a592ab5f6c Use digests rather than timestamps for QC execution.
Using timestamps does not work as we might be
importing files with timestamps older than the
last QC run. Those would not be detected by a
timestamp based method but would be by this
digest based approach.

There is a project-wide digest and per sequence
digests. The former takes the path and hashes of
all files known to Dougal for this project (the
`files` table), concatenates them and computes
the MD5 checksum. Sequence digests do the same
but only including the files related to that
sequence.
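
A minimal sketch of the idea (names illustrative; the input is the
(path, hash) tuples from the `files` table):

  from hashlib import md5

  def project_digest(files):
      # files: iterable of (path, hash) tuples; sorted here for a
      # stable digest (an assumption, not necessarily what the real
      # code does).
      h = md5()
      for path, filehash in sorted(files):
          h.update(path.encode())
          h.update(filehash.encode())
      return h.hexdigest()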
2022-03-17 18:32:09 +01:00
D. Berge
9b571ce34d Merge branch '138-keep-edit-history-of-event-log-entries' into devel 2022-03-16 21:31:38 +01:00
D. Berge
aa2b158088 Remove spurious actions from DB template 2022-03-16 21:30:32 +01:00
D. Berge
0d1f2b207c Apply changes from 38e4e705a4 to DB schema template 2022-03-16 21:29:53 +01:00
D. Berge
38e4e705a4 Modify database upgrade file 12.
Two functions that were dependent on the `events` view were
changed to work with `event_log` instead.
2022-03-16 21:08:42 +01:00
D. Berge
82d7036860 Merge branch '138-keep-edit-history-of-event-log-entries' into 'devel'
Resolve "Keep edit history of event log entries"

Closes #78, #101, #138, #141, #170, #172, and #181

See merge request wgp/dougal/software!20
2022-03-15 13:25:43 +00:00
D. Berge
0727e7db69 Update database templates to schema v0.3.1 2022-03-15 14:17:28 +01:00
D. Berge
2484b1c473 Merge branch '188-adapt-qc-results-view-to-new-api-endpoints' into 138-keep-edit-history-of-event-log-entries 2022-03-09 21:37:27 +01:00
D. Berge
750beb5c02 Add explicit indication of all tests passed 2022-03-09 21:36:49 +01:00
D. Berge
cd2e7bbd0f Merge branch '184-refactor-qcs' into 138-keep-edit-history-of-event-log-entries 2022-03-09 21:26:40 +01:00
D. Berge
21d5383882 Update QC check definitions 2022-03-09 21:25:47 +01:00
D. Berge
2ec484da41 Fix detection of sequence modification time 2022-03-09 21:25:04 +01:00
D. Berge
648ce9970f Interpolate timestamps for non-existing shotpoints 2022-03-09 21:22:33 +01:00
D. Berge
fd278a5ee6 Add database function: tstamp_interpolate 2022-03-09 21:21:48 +01:00
D. Berge
4f5cce33fc Add comments to database functions 2022-03-09 21:21:01 +01:00
D. Berge
53bb75a2c1 Add new database upgrade file 11.
Some of the things in new upgrade file 12 depend
on the functions defined here.
2022-03-09 19:07:58 +01:00
D. Berge
45595bd64f Rename database upgrades 11‒13 → 12‒14 2022-03-09 19:07:58 +01:00
D. Berge
af4d141c6a Merge branch '184-refactor-qcs' into '138-keep-edit-history-of-event-log-entries'
Resolve "Refactor QCs"

See merge request wgp/dougal/software!22
2022-03-09 17:46:20 +00:00
D. Berge
bef2be10d2 Merge branch '188-adapt-qc-results-view-to-new-api-endpoints' into '184-refactor-qcs'
Resolve "Adapt QC results view to new API endpoints"

See merge request wgp/dougal/software!24
2022-03-09 16:56:35 +00:00
D. Berge
803a08a736 Merge branch '187-create-qc-results-api-endpoints' into '184-refactor-qcs'
Resolve "Create QC results API endpoints"

See merge request wgp/dougal/software!23
2022-03-09 16:55:57 +00:00
D. Berge
c86cbdc493 Refactor QC view to use new API endpoint.
This provides essentially the same user experience as the old
endpoint, with one exception as of this commit:

* The user is not able to “accept” or “unaccept” QC events.
2022-03-09 17:50:55 +01:00
D. Berge
186615d988 Add comments for ease of browsing 2022-03-09 17:43:51 +01:00
D. Berge
666f91de18 Add QC results API endpoint 2022-03-09 17:43:10 +01:00
D. Berge
c8ce786e39 Add API middleware for returning QC results 2022-03-09 17:41:27 +01:00
D. Berge
73cb26551b Add library functions for getting QC results from DB.
We return the QC definitions tree structure, augmented with
a `sequences` attribute which contains `raw_lines` tuples
which are in turn augmented with a `shots` attribute
containing `event_log` tuples. The whole structure looks
something like:

qc_test:
  qc_test:
    sequences:
      - sequence0:
          shots: [sp0, sp1, …]
      - sequence1:
          shots: [sp0, sp1, …]
  qc_test:
    sequences:
      - sequence0:
          shots: [sp0, sp1, …]
  …
2022-03-09 17:35:12 +01:00
D. Berge
d90acb1aeb Add utility to convert QC definitions tree into a flat list 2022-03-09 17:32:23 +01:00
D. Berge
14a2f57c8d Refactor QC execution and results saving.
The results are now saved as follows:

For shot QCs, failing tests result in an event being created in
the event_log table. The text of the event is the QC result message,
while the labels are as set in the QC definition. It is conventionally
expected that these include a `QC` label. The event `meta` contains a
`qc_id` attribute with the ID of the failing QC.

For sequences, failing tests result in a `meta` entry under `qc`, with
the QC ID as the key and the result message as the value.

Finally, the project's `info` table still has a `qc` key, but unlike
with the old code, which stored all the QC results in a huge object
under this key, now only the timestamp of the last time a QC was run on
this project is stored, as `{ "updatedOn": timestamp }`.

The QCs are launched by calling the main() function in /lib/qc/index.js.
This function will first check the timestamp of the files imported into
the project and only run QCs if any of the file timestamps are later
than `info.qc.updatedOn`. Likewise, for each sequence, the timestamp of
the files comprising that sequence is checked against
`info.qc.updatedOn` and only those which are newer are actually
processed. This cuts down the running time very considerably.

The logic now is much easier on memory too, as it doesn't load the
whole project at once into memory. Instead, shotpoint QCs are processed
first, and for this a cursor is used, fetching one shotpoint at a
time. Then the sequence QCs are run, also one sequence at a time
(fetched via an individual query touching the `sequences_summary` view,
rather than via a cursor; we reuse some of the lib/db functions here),
for each sequence all its shotpoints and a list of missing shots are
also fetched (via lib/db function reuse) and passed to the QC functions
as predefined variables.

The logic of the QC functions is also changed. Now they can return:

* If a QC passes, the function MUST return boolean `true`.

* If a QC fails, the function MAY return a string describing the nature
  of the failure, or in the case of an `iterate: sequence` type test,
  it may return an object with these attributes:

  - `remarks`: a string describing the nature of the failure;
  - `labels`: a set of labels to associate with this failure;
  - `shots`: an object in which each attribute denotes a shotpoint number
    and the value consists of either a string or an object with
`remarks` (string), `labels` (array of strings) attributes. This allows
us to add detail about which shotpoints exactly contribute to cause a
sequence-wide test failure (this may not be applicable to every
sequence-wide QC) and it's also a handy way to detect and insert events
for missing shots.

* For QCs which may give false positives, such as missing gun data, a
  new QC definition attribute is introduced: if `ignoreAllFailed` is
boolean `true` and all shots fail the test for a sequence, or all
sequences fail the test for a prospect, the results of the QC will be
ignored, as if the test had passed. This is mostly to deal with gun or
any other data that may be temporarily missing.
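
An illustrative normalisation of these return values (in Python for
brevity; the real QC code is JavaScript and this helper is
hypothetical):

  def normalise_qc_result(result):
      if result is True:
          return { "passed": True }
      if isinstance(result, str):
          return { "passed": False, "remarks": result }
      if isinstance(result, dict):
          return { "passed": False,
                   "remarks": result.get("remarks"),
                   "labels": result.get("labels", []),
                   "shots": result.get("shots", {}) }
      raise ValueError("QC functions must return true, a string or an object")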
2022-03-07 21:41:10 +01:00
D. Berge
67f8b9c6dd Bypass permissions check on info.put() if role is null.
The comparison is strict non-equality so a null role cannot
be forced via the API.

The need for this is so that we can reuse this function to
save QC results, which is something that does not take
place over the API.
2022-03-07 21:20:21 +01:00
D. Berge
d3336c6cf7 Add fetchRow DB function.
Helper function to fetch a row at a time using a cursor.
2022-03-07 21:16:43 +01:00
D. Berge
17bb88faf4 Cope with P1/11s with no S records 2022-03-07 21:08:22 +01:00
D. Berge
a52c7e91f5 Document in runner.sh how to run ASAQC in test mode 2022-03-07 21:07:20 +01:00
D. Berge
8debe60d5c Cope with undefined labels 2022-03-02 19:39:29 +01:00
D. Berge
ee9a33513a Update database README 2022-02-28 21:27:20 +01:00
D. Berge
723c9cc166 Make it possible to repeatedly apply DB upgrade 11.
Even though this makes PostgreSQL 14 a hard dependency.
2022-02-28 21:26:19 +01:00
D. Berge
cb952d37f7 Fix: do not require file that no longer exists 2022-02-28 21:25:00 +01:00
D. Berge
d5fc04795d Make rows dense.
This should probably be turned into an option controlled by the
user.
2022-02-27 19:59:06 +01:00
D. Berge
4e0737335f Add row context menu.
It replaces the `Actions` column in the old table and provides
more actions.

The user can now edit not just the comments and labels but also
the timestamp / shotpoint as requested in #78 (closes #78).

Because events are grouped by timestamp / shotpoint (each row
represents a unique timestamp or shotpoint), the behaviour is
slightly different depending on whether the user clicks on a
row containing a single (editable) event, or on one of multiple
editable events in the same row. Also, rows containing only
read-only events are recognised and no editing actions are
provided for those.
2022-02-27 19:59:06 +01:00
D. Berge
d47c8a9e10 Add (disabled) active row highlighter.
It implements the same functionality as in other tabs
such as sequences, lines, etc., but it is disabled here
because in my opinion it doesn't look too nice.

It will probably be a matter of enabling it at some point
and asking for feedback on user preference.
2022-02-27 19:56:21 +01:00
D. Berge
7ea0105d9f Add popularLabels computed property.
Returns a list of labels used in the current view,
in order of popularity (most used first).

NOTE: this property is not actually used. It's
technically dead code.
2022-02-27 19:56:21 +01:00
D. Berge
8f4bda011b Add dialogue to edit event labels.
This assumes that adding or removing labels is a relatively
common action to do on an event and provides a quicker
and simpler mechanism than bringing up the full event
dialogue.

This is meant to be invoked from a context menu action or
similar.
2022-02-27 19:56:21 +01:00
D. Berge
48505dbaeb View event history.
When an event has been modified, this control opens a dialogue
where the previous version of the event may be reviewed and if
necessary restored.

Technically, this was the crux of #138, and closes it.
2022-02-27 19:56:21 +01:00
D. Berge
278c46f975 Adapt events view to new schema 2022-02-27 19:56:21 +01:00
D. Berge
180343754a Remove old event edit dialogue 2022-02-27 19:56:21 +01:00
D. Berge
9aa9ce979b Replace event edit dialogue.
The old <dougal-event-edit-dialog/> gets replaced by
<dougal-event-edit/> which handles the new events schema.
2022-02-27 19:56:21 +01:00
D. Berge
1e5be9c655 Add new event edit dialogue.
Replaces <dougal-event-edit-dialog/>.
2022-02-27 19:56:21 +01:00
D. Berge
0be5dba2b9 Return also labels from <dougal-context-menu/>.
Keeping in mind that the input model is a tree and labels
may be at any level in the tree, not just in the leaves.
2022-02-27 19:56:21 +01:00
D. Berge
0c91e40817 Fix <dougal-context-menu/> default prop value 2022-02-27 19:56:21 +01:00
D. Berge
c1440c7ac8 Simplify <dougal-context-menu/> model 2022-02-27 19:56:21 +01:00
D. Berge
606f18c016 Add Vuex position and timestamp getters for real-time event 2022-02-27 19:56:21 +01:00
D. Berge
febf109cce Update API description 2022-02-27 19:56:21 +01:00
D. Berge
9b700ffb46 Update required database schema 2022-02-27 19:56:21 +01:00
D. Berge
9aca927e49 Update version checking mechanism.
Checks both database schema and API versions.
2022-02-27 19:56:21 +01:00
D. Berge
adaa1a6b8a Add version number to API 2022-02-27 19:56:21 +01:00
D. Berge
8790a797d9 Allow restricting by timestamp or position.
Closes #181.
2022-02-27 19:56:21 +01:00
D. Berge
d7d75f34cd Remove event caching.
That was a horrible kludge and should not be necessary with the
new schema, which is simpler and much faster.
2022-02-27 19:56:21 +01:00
D. Berge
950582a5c6 Refactor event middleware and db code to use new tables 2022-02-27 19:56:21 +01:00
D. Berge
d0da1b005b Add replaceMarkers utility function 2022-02-27 19:56:21 +01:00
D. Berge
1e2c816ef3 Add database upgrade file 13.
Drops the old event tables.

NOTE: consider not applying this patch until confident that
the migration has proceeded smoothly. Dougal can operate just
fine without it.
2022-02-27 19:56:21 +01:00
D. Berge
54b457b4ea Add database upgrade file 12.
Migrates data from old event tables to new.
2022-02-27 19:56:21 +01:00
D. Berge
4d2efd1e04 Move sequence events middleware to a different path.
This is to make room for a new endpoint to retrieve
data for individual events.
2022-02-27 19:56:21 +01:00
D. Berge
920ea83ece Add API endpoint to retrieve a single shotpoint.
This will be used by the new event dialogue in the
frontend to get shotpoint information when creating
or editing events.
2022-02-27 19:56:21 +01:00
D. Berge
d33fe4e936 Add database utilities file.
Intended to contain reusable functions.
2022-02-27 19:56:21 +01:00
D. Berge
c347b873c5 Update database README.
Add information on restoring from backup and troubleshooting
details when migrating PostgreSQL versions.
2022-02-27 19:56:21 +01:00
D. Berge
0c6567d8f8 Add database upgrade file 11 2022-02-27 19:56:12 +01:00
D. Berge
195741a768 Merge branch '173-do-not-use-inodes-as-part-of-a-file-s-fingerprint' into 'devel'
Resolve "Do not use inodes as part of a file's fingerprint"

Closes #173

See merge request wgp/dougal/software!19
2022-02-07 16:08:04 +00:00
D. Berge
0ca44c3861 Add database upgrade file 10.
NOTE: this is the first time we modify the actual data
in the database, as opposed to adding to the schema.
2022-02-07 17:05:19 +01:00
D. Berge
53ed096e1b Modify file hashing function.
We remove the inode from the hash as it is unstable when the
files are on an SMB filesystem, and replace it with an MD5
of the absolute file path.
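
For reference, the reworked fingerprint (as implemented in the
datastore.py hunk further down this page):

  import os
  from hashlib import md5

  def file_hash(file):
      h = md5()
      h.update(file.encode())          # MD5 of the absolute path...
      name_digest = h.hexdigest()[:16] # ...replaces the unstable inode
      st = os.stat(file)
      return ":".join([str(v) for v in
                       [st.st_size, st.st_mtime, st.st_ctime, name_digest]])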
2022-02-07 17:03:10 +01:00
D. Berge
75f91a9553 Increment schema wanted version 2022-02-07 17:02:59 +01:00
D. Berge
40b07c9169 Merge branch '175-add-database-versioning-and-migration-mechanism' into 'devel'
Resolve "Add database versioning and migration mechanism"

Closes #175

See merge request wgp/dougal/software!18
2022-02-07 14:43:50 +00:00
D. Berge
36e7b1fe21 Add database upgrade file 09 2022-02-06 23:26:57 +01:00
D. Berge
e7fa74326d Add README to database upgrades directory 2022-02-06 23:24:24 +01:00
D. Berge
83be83e4bd Check database schema compatibility.
The server will not start unless it satisfies itself that we're
running against a compatible database schema.
2022-02-06 22:52:45 +01:00
D. Berge
81ce6346b9 Add database schema information to package.json.
Used to determine if the actual schema on the database
is compatible with the version of the server we're
attempting to run.
2022-02-06 22:51:25 +01:00
D. Berge
923ff1acea Add more details to package.json 2022-02-06 22:50:44 +01:00
D. Berge
8ec479805a Add version reporting library.
This reports the current server version, from Git by
default.

Also, and of more interest, it reports whether the
current database schema is compatible with the
server code.
2022-02-06 22:48:20 +01:00
D. Berge
f10103d396 Enforce info key access restrictions on the API.
Obviously, those keys can be edited freely at the database
level. This is intended.
2022-02-06 22:40:53 +01:00
D. Berge
774bde7c00 Reserve certain keys on info tables 2022-02-06 22:39:11 +01:00
D. Berge
b4569c14df Update database README.
Document how to create a Dougal database from scratch
and how to update PostgreSQL.
2022-02-06 22:28:21 +01:00
D. Berge
54eea62e4a Fix require path 2022-02-06 14:24:25 +01:00
D. Berge
69c4f2dd9e Merge branch '161-transfer-files-to-asaqc' into 'devel'
Resolve "Transfer files to ASAQC"

Closes #161

See merge request wgp/dougal/software!16
2021-10-09 09:23:54 +00:00
D. Berge
acc829b978 Switch to production URL in ASAQC configuration 2021-10-06 04:16:17 +02:00
D. Berge
ff4913c0a5 Instrument getLineName to monitor probable cause of #165 2021-10-06 02:12:05 +02:00
D. Berge
51452c978a Add ASAQC task to runner 2021-10-04 21:26:13 +02:00
D. Berge
927ef71ecc Send Ocp-Apim-Subscription-Key with ASAQC requests 2021-10-04 21:00:41 +02:00
D. Berge
14541bcb95 Make code compatible with NodeJS 14 2021-10-04 16:52:04 +02:00
D. Berge
5c190e5554 Add ASAQC queue processor.
This code implements the backend processing side
of the ASAQC queue, i.e., the bit that communicates
with the remote API.

Its expected use is to have it running at regular
intervals, e.g., via cron. The entry point is:

lib/www/server/queues/asaqc/index.js

That file is executable and can be run directly
from the shell or within a script. Read the comments
in that file for further instructions.
2021-10-04 02:21:00 +02:00
D. Berge
0f447fc27d Add ASAQC API mock-up.
To be used for testing and debugging. See
index.js for instructions.
2021-10-04 02:21:00 +02:00
D. Berge
dfbccf3bc6 Add ASAQC (test) server details to configuration.
The URL corresponds to that of a built-in test server.

Note that the /etc/ssl directory is protected against
accidental inclusion into the repository by commit
458b6837. The TLS private key should *never* be
committed.
2021-10-04 02:21:00 +02:00
D. Berge
a491530018 Add ASAQC transfer support to client (sequence list) 2021-10-04 02:21:00 +02:00
D. Berge
c7784aa52f Add ASAQC queue endpoints to API 2021-10-04 02:21:00 +02:00
D. Berge
0533314b01 Add DOUGAL_ROOT property to configuration object 2021-10-04 02:21:00 +02:00
D. Berge
8da664a025 Add directory for TLS certificates.
And add it to .gitignore so its contents do not get committed
by accident.
2021-10-04 02:21:00 +02:00
D. Berge
6debf5c355 Add queue-related functions to the database interface.
These functions, in general following the same HTTP-verb
approach as the rest of the database interface, are for
use with both the HTTP API and the queue processor.
2021-10-04 02:21:00 +02:00
D. Berge
db8efce346 Remove dead code 2021-10-04 02:21:00 +02:00
D. Berge
b107c71c6f Add option to get only summary info for a sequence.
Which is faster when we don't need the shotpoint data.
2021-10-04 02:21:00 +02:00
D. Berge
ef12168811 Make it possible to list one specific sequence 2021-10-04 02:21:00 +02:00
D. Berge
e1dc970db4 Add export functions for SeisJSON data.
These functions abstract the creation of SeisJSON payloads
and their various representations as GeoJSON, HTML or PDF.
2021-10-04 02:21:00 +02:00
D. Berge
f2de8509cc Make Babel support logical assignment operators.
That's ||=, &&=, ??=, and the like.
2021-10-04 02:21:00 +02:00
D. Berge
1e6c6ef961 Add throttle() helper.
Useful to avoid repeated updates triggered by
incoming row-level database events.
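
The helper itself is JavaScript; the same idea in Python, for
illustration only:

  import time

  def throttle(fn, interval):
      last = [0.0]
      def wrapper(*args, **kwargs):
          now = time.monotonic()
          # Drop calls arriving before `interval` seconds have elapsed.
          if now - last[0] >= interval:
              last[0] = now
              return fn(*args, **kwargs)
      return wrapper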
2021-10-04 02:21:00 +02:00
D. Berge
38e56394d4 Add queue_items to the list of DB events to listen for 2021-10-04 02:21:00 +02:00
D. Berge
374fb7de67 Add database upgrade file 08 2021-10-04 02:21:00 +02:00
D. Berge
978256ceab Describe ASAQC-related API endpoints 2021-10-04 02:21:00 +02:00
D. Berge
5a7fe9b38a Update API version description 2021-10-04 02:21:00 +02:00
D. Berge
83c992c0d9 Fix description of endpoints authorisation 2021-10-04 02:21:00 +02:00
D. Berge
18ee28d72e Describe HTTP 401 responses explicitly 2021-10-04 02:21:00 +02:00
D. Berge
6bc3aff587 Change server names in API description 2021-10-04 02:21:00 +02:00
D. Berge
74b3de5c26 Merge branch '75-quality-control-dashboard' into 'devel'
Resolve "Quality control dashboard" – sequence visualisations

Closes #143, #142, and #150

See merge request wgp/dougal/software!14
2021-10-01 21:17:17 +00:00
D. Berge
57a08c93bc Add link to graphics tab from sequence list 2021-09-28 22:16:12 +02:00
D. Berge
fabc9fe757 Do not make graphs editable 2021-09-28 18:30:26 +02:00
D. Berge
6f32f24481 Add configuration dialog to Graphs.
Lets the user choose which aspects (graphs) he wants to
be visible.
2021-09-28 18:17:38 +02:00
D. Berge
dffe7defbb Add tooltips to Graphs toolbar 2021-09-28 18:16:57 +02:00
D. Berge
b9844528f1 Add graphBar to resizeObserver.
This ensures that it is always the right size when it first
gets displayed.
2021-09-28 18:15:19 +02:00
D. Berge
cd78dbd0d8 Fix typos in resizeObserver 2021-09-28 18:14:39 +02:00
D. Berge
798203be9f Add preferences support to DougalGraphGunsPressure 2021-09-28 18:12:37 +02:00
D. Berge
5bfd7dc835 Add preferences support to DougalGraphGunsDepth 2021-09-28 18:11:43 +02:00
D. Berge
c17862fbbb Add preferences support to DougalGraphGunsTiming 2021-09-28 18:11:04 +02:00
D. Berge
04c0369923 Add preferences support to DougalGraphArraysIJScatter 2021-09-28 18:10:08 +02:00
D. Berge
026cfb6f98 Rename GraphArraysIJScatter to DougalGraphArraysIJScatter 2021-09-28 18:08:48 +02:00
D. Berge
a4e6ec0712 Add support for personalising QC graph settings.
Preferences are read from the store and passed to graph components
via the `settings` prop. Component may changed their own settings
by emitting the `update:settings` signal.
2021-09-28 17:59:32 +02:00
D. Berge
b3e052cb12 Add utility function to filter preferences by a prefix 2021-09-28 17:53:07 +02:00
D. Berge
cf88ecf172 Save user preferences to Vuex store.
The user preferences are saved in the browser's localStorage and
read by setCredentials() whenever that function is called. From
that point they are cached in the Vuex store.

Provided that preferences are only modified through the store,
via the saveUserPreference() call, the preferences should always
be in sync between the store and the browser.

The preferences object is a key/value store. Each key is
expected to be in the form of a series of dot-separated prefixes,
e.g., `UserX.RoleY.Graphs.GraphType1.setting0`.

For user preferences, the first two prefix elements should be the
username and role of the user that the setting applies to. These will
be automatically added and stripped by saveUserPreference() and
loadUserPreferences() respectively.
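
Schematically (a Python sketch only; the real code is JavaScript in
the Vuex store):

  def save_user_preference(prefs, username, role, key, value):
      # "Graphs.GraphType1.setting0" is stored under
      # "UserX.RoleY.Graphs.GraphType1.setting0".
      prefs[".".join([username, role, key])] = value

  def load_user_preferences(prefs, username, role):
      # Strip the "username.role." prefix again on the way out.
      prefix = "{0}.{1}.".format(username, role)
      return { k[len(prefix):]: v
               for k, v in prefs.items() if k.startswith(prefix) }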
2021-09-28 17:42:49 +02:00
D. Berge
e267440711 Move comment to right place 2021-09-28 17:30:48 +02:00
D. Berge
454094b187 Refactor gun heatmaps component.
Fixes #150.

Contributes towards the goal of #149, as irrelevant data (such
as for non-firing guns) is no longer shown at all. This affects:

* Firetime (only active array data shown)
* Gun deltas (only active array shown)
* Fill time (only non-active array shown)
2021-09-21 00:32:00 +02:00
D. Berge
862e754a6f Fix labelling of gun mode and detect heatmaps.
Fixes #142.
2021-09-20 00:18:31 +02:00
D. Berge
894877750e Make heatmap hover box more informative.
Closes #143.
2021-09-20 00:17:35 +02:00
D. Berge
09b45d5d65 Swap outlier colours 2021-09-11 21:30:12 +02:00
D. Berge
1352c3b312 Make graph colours consistent for port / starboard elements 2021-09-11 19:19:58 +02:00
D. Berge
30aa2c302e Add graphic aesthetics 2021-09-11 12:38:12 +02:00
D. Berge
3eaa2757b9 Add Graphs tab to navigation bar 2021-09-11 12:19:06 +02:00
D. Berge
6f6af1bbc7 Add graphs/ route to client 2021-09-11 12:19:06 +02:00
D. Berge
019561229c Add Graph component.
It displays a series of data plots.
2021-09-11 12:19:06 +02:00
D. Berge
e212dc8b92 Add unpack helper function to frontend.
Convenience function to extract a key from an
array of objects.
2021-09-11 12:19:06 +02:00
D. Berge
5c00013892 Add graphic library dependencies 2021-09-11 12:19:06 +02:00
D. Berge
1e5bdcc068 Add Vuex functions to load / save user preferences 2021-09-11 12:19:06 +02:00
D. Berge
a280a910f5 Add database upgrade file 07 2021-09-11 12:19:06 +02:00
D. Berge
45fe467a21 Implement sequence/get API endpoint.
It returns data for all individual points in a sequence.
2021-09-11 12:19:06 +02:00
D. Berge
8d3b7adc78 Show azimuths to two decimals in SeisJSON exports 2021-09-04 23:34:53 +02:00
D. Berge
079d3a18b0 Merge branch '131-show-missing-shots-in-sequence-reports' into 'devel'
Resolve "Show missing shots in sequence reports"

Closes #131

See merge request wgp/dougal/software!15
2021-09-04 21:32:44 +00:00
D. Berge
f0b1fc2fe6 Show missed shot events in HTML, PDF exports 2021-09-04 23:29:58 +02:00
D. Berge
987bdf6e21 Add option to export missing shots as SeisJSON events 2021-09-04 23:28:43 +02:00
D. Berge
1d3507b3a4 Export missing shots by default.
Unless the user explicitly sets the option `missing` to
`false`, a list of missing shotpoints will be included in the
SeisJSON file.
2021-09-04 23:19:25 +02:00
D. Berge
a82fc7bc8a Recover from feed XML parsing error 2021-09-04 02:43:58 +02:00
D. Berge
29b3c9a250 Show azimuth to two decimals elsewhere too.
Related to #126, might as well use two decimals throughout.
2021-09-02 01:18:47 +02:00
D. Berge
040c1ead96 Show azimuth to two decimal places.
In planner report template.

Closes #126.
2021-09-02 01:17:40 +02:00
D. Berge
1c7bed0c15 Fix returning next planned sequence number.
If no sequences have been shot, return 1 instead of null as the
next available sequence number.

Fixes #125.
2021-09-02 01:04:38 +02:00
D. Berge
dfcda1b2d9 Merge branch '103-24-hour-lookahead-planning-report' into 'devel'
Resolve "24-hour lookahead planning report"

Closes #103

See merge request wgp/dougal/software!13
2021-06-21 14:53:35 +00:00
D. Berge
b3aadfc33c Merge branch '60-update-planner-as-sequences-are-shot' into 'devel'
Resolve "Update planner as sequences are shot"

Closes #60

See merge request wgp/dougal/software!12
2021-06-21 14:52:11 +00:00
D. Berge
d5980d9154 Add CSV planner output option 2021-06-19 19:04:05 +02:00
D. Berge
b5f2945c8b Fix end time in plan HTML template 2021-06-19 15:43:04 +02:00
D. Berge
9bbffe2ae0 React to changes in planner remarks 2021-06-19 12:27:36 +02:00
D. Berge
09f60d6c18 Add database upgrade file 06 2021-06-19 12:23:25 +02:00
D. Berge
81d9ea19cc Add adjust_planner() function to DB schema.
It updates the planned lines details according to production and current
time.
2021-06-19 12:18:28 +02:00
D. Berge
497d4d68f9 Call notify on changes to schema's info table 2021-06-19 12:17:26 +02:00
D. Berge
853deca3c3 Rename misnamed trigger 2021-06-19 12:16:37 +02:00
D. Berge
99f1530db3 Replace phone icon in template.
Strangely enough, the emoji icon seems to work reliably across
platforms.
2021-05-31 02:54:38 +02:00
D. Berge
b325ae3452 Let the user know when there are no planner comments 2021-05-31 02:47:20 +02:00
D. Berge
f97d334fe5 Improve the aesthetics of the planner remarks section 2021-05-31 02:41:58 +02:00
D. Berge
cb114f01cd Add GUI support for downloading planner data.
Including HTML and PDF formats, which constitutes the lookahead report.
2021-05-31 02:29:50 +02:00
D. Berge
707df76b70 Add GUI support for saving planner remarks.
They get saved to `/project/:project/info/plan/remarks`.
2021-05-31 02:29:50 +02:00
D. Berge
bba050032f Add POST, PUT, DELETE support to /project/:project/info.
It reuses the same backend functions as for the global `/info/` path.
2021-05-31 02:29:50 +02:00
D. Berge
594233c965 Add HTML & PDF planner output options.
Coupled with a suitable Nunjucks template, this is effectively the
24-hour (or whatever period of time) lookahead.
2021-05-31 02:29:50 +02:00
D. Berge
5795c1f87d Add server-side map rendering component.
Based on our own fork of leaflet-headless.
2021-05-31 02:29:50 +02:00
D. Berge
ccd1852f65 Add Nunjucks renderer get filter.
Given an argument consisting of an array of objects and an attribute
name `attr`, it returns an array of all `attr` attributes.
2021-05-31 02:29:50 +02:00
D. Berge
17947df168 Modify Nunjucks renderer timestamp function.
* It accepts a `precision` parameter which truncates the timestamp to a
given precision. Can be `seconds`, `minutes`, `hours` or `days` / `date`.

* It tries to be more flexible in what it accepts as input.

* It accepts an input of "now" which returns the current timestamp. Can
  be used along with `precision`.
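
In Python terms the truncation behaves roughly like this (the actual
filter is JavaScript inside the Nunjucks renderer):

  def truncate(ts, precision):
      # ts is a datetime; a "now" input resolves to the current
      # timestamp before truncation.
      if precision in ("days", "date"):
          return ts.replace(hour=0, minute=0, second=0, microsecond=0)
      if precision == "hours":
          return ts.replace(minute=0, second=0, microsecond=0)
      if precision == "minutes":
          return ts.replace(second=0, microsecond=0)
      if precision == "seconds":
          return ts.replace(microsecond=0)
      return ts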
2021-05-31 02:29:50 +02:00
D. Berge
041878096d Accept a mime query parameter to force MIME type 2021-05-31 02:29:50 +02:00
D. Berge
ea3e31058f Refactor the planned lines editing logic.
We move most of the logic from the client (as it was until now) to the
server.

The PATCH command maintains the same format but it should provide only
one of the following keys per request:

* ts0
* ts1
* speed
* fsp
* lsp
* lagAfter
* sequence

   Earlier keys in the list above take priority over later ones.

The following keys may be provided by themselves or in combination with
each other (but not with any of the above):

* name
* remarks
* meta

As a special case, an empty string as the `name` value causes the name
to be auto-generated.

See comments in the code `patch.js` for details on the update logic.
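
Schematically, the selection amounts to something like this (a sketch
only; see patch.js for the real logic):

  PRIORITY = ["ts0", "ts1", "speed", "fsp", "lsp", "lagAfter", "sequence"]

  def pick_update_key(patch):
      # Earlier keys take priority over later ones.
      for key in PRIORITY:
          if key in patch:
              return key
      return None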
2021-05-28 20:30:59 +02:00
D. Berge
534a54ef75 Add database upgrade file 05 2021-05-28 20:30:59 +02:00
D. Berge
f314536daf Change planned_lines trigger from statement to row.
Because a) it tells us what has changed and b) doesn't fire if we
didn't actually change anything.
2021-05-28 20:30:59 +02:00
D. Berge
de4aa52417 Make planned_lines primary key deferrable.
Helps when we need to renumber sequences.
2021-05-28 20:30:59 +02:00
D. Berge
758b13b189 Add saillines layer to map 2021-05-28 20:30:29 +02:00
D. Berge
967db1dec6 Include NTBA status in preplot GIS output 2021-05-28 20:29:57 +02:00
D. Berge
91fd5e4559 Ensure that timestamp is always a Date object 2021-05-27 17:50:01 +02:00
D. Berge
cf171628cd Fix error in editing of planned line start time 2021-05-27 17:49:32 +02:00
D. Berge
94c29f4723 Change the sunset / sunrise times reported via the tooltip.
The icon still uses the lower edge of the sun to calculate day / night,
but the tooltip shows actual sunrise and sunset times.
2021-05-27 02:08:30 +02:00
D. Berge
14b2e55a2e Remove edit controls from planner for read-only users.
Left over from #108.
2021-05-27 01:32:03 +02:00
D. Berge
c30e54a515 Round vessel speeds to 0.1 kt 2021-05-27 01:09:28 +02:00
D. Berge
7ead826677 Show sunrise / sunset times in the planner.
* A ‘sun’ icon is shown when a line starts and ends in daytime
* A ‘moon’ icon is shown when a line starts and ends in nighttime
* A ‘sun/moon’ icon is shown in other cases

Sunrise and sunset times are provided as a tooltip when hovering over
the icon.

Closes #72.
2021-05-27 01:02:42 +02:00
D. Berge
7aecb514db Clear QC metadata when importing gun data.
Fixes #118.
2021-05-26 00:30:58 +02:00
D. Berge
ad395aa6e4 Include the planned lines table in system dumps 2021-05-26 00:15:09 +02:00
D. Berge
523ec937dd Always merge metadata on import.
The INSERT INTO raw_lines / final_lines will not always be executed as
the lines may already exist (particularly in raw_lines because of
*online*), so whether it worked or not we merge the metadata immediately
afterwards (this may cause an extra notification to be fired).
2021-05-25 03:19:42 +02:00
D. Berge
9d2ccd75dd Do not try to use line name if there isn't one 2021-05-25 03:19:00 +02:00
D. Berge
3985a6226b Suggest ${lineName}-NavLog.${extension} as file name.
This is for the usual case where only one sequence is requested.

When more than one sequence is requested, the suggested name comes out
as ${projectId}-${sequenceList}.${extension}, where `sequenceList` is
the list of sequence numbers separated by semicolons, e.g.:
eq21203-37;38;39.html.
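
A sketch of the naming rule (helper name hypothetical):

  def suggested_name(project_id, line_name, sequences, extension):
      if len(sequences) == 1:
          return "{0}-NavLog.{1}".format(line_name, extension)
      seq_list = ";".join(str(s) for s in sequences)
      return "{0}-{1}.{2}".format(project_id, seq_list, extension)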

Closes #116.
2021-05-25 02:23:41 +02:00
D. Berge
7d354ffdb6 Add database upgrade file 2021-05-25 02:21:11 +02:00
D. Berge
3d70a460ac Output raw and final lines metadata in summary views 2021-05-25 02:13:50 +02:00
D. Berge
caae656aae Fix event detection failure.
There was a typo in the channel detection logic, resulting
in bogus events full of `undefined` data values.

Fixes #115.
2021-05-24 18:30:53 +02:00
D. Berge
5708ed1a11 Merge branch '57-make-event-log-entries-for-start-and-end-of-line-upon-import-of-final-sequence-if-the-entries-do' into 'devel'
Resolve "Make event log entries for start and end of line upon import of final sequence, if the entries do not already exist"

Closes #57

See merge request wgp/dougal/software!11
2021-05-24 15:44:58 +00:00
D. Berge
ad3998d4c6 Add database upgrade file 2021-05-24 17:41:11 +02:00
D. Berge
8638f42e6d Add database upgrade files.
These files contain the sequence of SQL commands needed to bring
a database or project schema up to date with the latest template
database or project schema.

These files must be applied manually. Check the comments at the top of
the file for instructions.
2021-05-24 17:39:01 +02:00
D. Berge
bc5aef5144 Run post-import functions after final lines.
The reason why we need to do it like this instead of relying on a trigger
is that the entry in final_lines is created first and only then are the
final_shots populated. If we fire the trigger on final_lines it is not going
to find any shots; if we fire it as a row trigger on final_shots it
would try to label every point in the sequence as it is imported; finally,
if we fire it as a statement trigger on final_shots we have no idea which
sequence was imported.
2021-05-24 16:59:56 +02:00
D. Berge
2b798c3ea3 Ignore attempts to put the same label twice on the same event 2021-05-24 16:59:20 +02:00
D. Berge
4d97784829 Upgrade database project schema template.
Adds:

* label_in_sequence (_sequence integer, _label text):
  Returns events containing the specified label.

* handle_final_line_events (_seq integer, _label text, _column text):
  - If _label does not exist in the events for sequence _seq:
    it adds a new _label label at the shotpoint obtained from
    final_lines_summary[_column].
  - If _label does exist (and hasn't been auto-added by this function
    in a previous run), it will add information about it to the final
    line's metadata.

* final_line_post_import (_seq integer):
  Calls handle_final_line_events() on the given sequence to check
  for FSP, FGSP, LGSP and LSP labels.

* events_seq_labels_single ():
  Trigger function to ensure that labels that have the attribute
  `model.multiple` set to `false` occur at most only once per
  sequence. If a new instance is added to a sequence, the previous
  instance is deleted.

* Trigger on events_seq_labels that calls events_seq_labels_single().

* Trigger on events_timed_labels that calls events_seq_labels_single().
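
In pseudo-Python, final_line_post_import() then amounts to roughly the
following (the column names are assumptions based on the description
above):

  def final_line_post_import(cur, seq):
      # One (label, column) pair per label to check.
      for label, column in (("FSP", "fsp"), ("FGSP", "fgsp"),
                            ("LGSP", "lgsp"), ("LSP", "lsp")):
          cur.execute("CALL handle_final_line_events(%s, %s, %s);",
                      (seq, label, column))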
2021-05-24 16:49:39 +02:00
D. Berge
13da38b4cd Make websocket notifications await.
Not sure if this helps much. It might help with avoiding
out of order notifications and reducing the rate at which
the clients get spammed when importing database dumps and
such, but that hasn't been tested.
2021-05-24 15:52:29 +02:00
D. Berge
5af89050fb Refactor SOL/EOL real-time detection handler.
This also implements a generic handler mechanism that can be
reused for other purposes, such as sending email / XMPP notifications,
doing real-time QC checks and so on.

Fixes #113.
2021-05-24 13:48:53 +02:00
D. Berge
d40ceb8343 Refactor list of notification channels into its own file 2021-05-24 13:38:19 +02:00
D. Berge
56d1279584 Allow api action to make arbitrary HTTP(S) requests.
If the URL is an absolute HTTP(S) one, we use it as-is.
2021-05-24 13:35:36 +02:00
D. Berge
d02edb4e76 Force the argument into String prior to splitting 2021-05-24 13:32:03 +02:00
D. Berge
9875ae86f3 Record P1/11 line name in database on import 2021-05-24 13:30:25 +02:00
D. Berge
53f71f7005 Set primary key on events_seq_labels in schema template 2021-05-23 22:27:00 +02:00
D. Berge
5de64e6b45 Add meta column to events view in schema template 2021-05-23 22:26:00 +02:00
D. Berge
67af85eca9 Recognise PENDING status in sequence imports.
If a final sequence file or directory name matches a pattern
which is recognised to indicate a ‘pending acceptance’ status,
the final data (if any exists) for that sequence will be deleted
and a comment added to the effect that the sequence has been
marked as ‘pending’.

To accept the sequence, rename its final file or directory name
accordingly.

Note: it is the *final* data that is searched for a matching
pattern, not the raw.
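
Illustrative sketch of the detection step, assuming a configurable
pattern (the actual pattern and names are survey-specific):

  import re

  # Hypothetical pattern; the real one comes from the survey
  # configuration.
  PENDING_RX = re.compile(r"pending", re.IGNORECASE)

  def is_pending(final_path):
      return bool(PENDING_RX.search(final_path))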

Closes #91.
2021-05-21 15:15:15 +02:00
D. Berge
779b28a331 Add info table to system dumps 2021-05-21 12:18:36 +02:00
D. Berge
b9a4d18ed9 Do not fail if no equipment has been defined.
Fixes #112.
2021-05-20 21:16:39 +02:00
D. Berge
0dc9ac2b3c Merge branch '71-add-equipment-info-to-the-logs' into 'devel'
Resolve "Add equipment info to the logs"

Closes #71

See merge request wgp/dougal/software!10
2021-05-20 19:05:35 +00:00
D. Berge
39d85a692b Use default Nunjucks template if necessary.
If the survey configuration does not itself have a template
we will use the one in etc/defaults/templates/sequence.html.njk.

It is not very likely that the template will be changed all that
often and it avoids issues when people forget to copy it across
to a new survey, etc.
2021-05-20 20:38:39 +02:00
D. Berge
e7661bfd1c Do not fail if requested object does not exist 2021-05-20 20:38:08 +02:00
D. Berge
1649de6c68 Update default sequence HTML template 2021-05-20 20:37:37 +02:00
D. Berge
1089d1fe75 Add equipment configuration frontend user interface 2021-05-20 18:35:56 +02:00
D. Berge
fc58a4d435 Implement equipment frontend component 2021-05-20 18:35:56 +02:00
D. Berge
c832d8b107 Commit default template for sequences 2021-05-20 18:35:56 +02:00
D. Berge
4a9e61be78 Add unique filter to Nunjucks renderer 2021-05-20 18:35:56 +02:00
D. Berge
8cfd1a7fc9 Export equipment info to Seis+JSON files 2021-05-20 18:35:56 +02:00
D. Berge
315733eec0 Refactor events export middleware.
Uses the `prepare` method for better reusability.
2021-05-20 18:35:56 +02:00
D. Berge
ad422abe94 Add prepare method for Seis+JSON and related exports.
It retrieves the data necessary for a complete Seis+JSON
export, including equipment info.
2021-05-20 18:35:56 +02:00
D. Berge
92210378e1 Listen for and broadcast info notifications 2021-05-20 18:21:01 +02:00
D. Berge
8d3e665206 Expose new API endpoint: /info/:path(*).
Provides CRUD access to values (which may be deeply nested) from the
global `info` table.
2021-05-20 18:19:29 +02:00
D. Berge
4ee65ef284 Implement info/delete middleware 2021-05-20 18:18:26 +02:00
D. Berge
d048a19066 Implement info/put middleware 2021-05-20 18:18:13 +02:00
D. Berge
97ed9bcce4 Implement info/post middleware 2021-05-20 18:17:52 +02:00
D. Berge
316117cb83 Implement info.delete() database method.
It deletes a (possibly deeply nested) element in the
`info` table.
2021-05-20 18:16:26 +02:00
D. Berge
1d38f6526b Implement info.put() database method.
Replaces an existing element with a new one, or inserts it
if there is nothing to replace. The element may be deeply
nested inside a JSON object or array in the `info` table.

Works for both public.info and survey_?.info.
2021-05-20 18:14:43 +02:00
D. Berge
6feb7d49ee Implement info.post() database method.
It adds an element to a JSON array corresponding to a
key in the info table. Errors out if the value is not
an array.
2021-05-20 18:13:15 +02:00
D. Berge
ac51f72180 Ignore empty path parts in info.get() 2021-05-20 18:10:51 +02:00
D. Berge
86d3323869 Remove logging statement 2021-05-20 18:10:27 +02:00
D. Berge
b181e4f424 Let the user set the search path to no survey.
This is so that we can access tables in the `public`
schema which are overloaded by survey tables, as is
the case with `info`.
2021-05-20 18:08:03 +02:00
D. Berge
7917eeeb0b Add table info to schema.
This one is independent of any projects so it goes
into `public`.
2021-05-20 18:07:05 +02:00
D. Berge
b18907fb05 Merge branch '53-mark-points-as-not-to-be-acquired-ntba' into 'devel'
Resolve "Mark points as ‘not to be acquired’ (NTBA)"

Closes #53

See merge request wgp/dougal/software!9
2021-05-17 18:34:46 +00:00
575 changed files with 478524 additions and 35608 deletions

.gitignore vendored (3 additions)

@@ -10,3 +10,6 @@ lib/www/client/source/dist/
lib/www/client/dist/
etc/surveys/*.yaml
!etc/surveys/_*.yaml
etc/ssl/*
etc/config.yaml
var/*

bin/check_mounts_present.py (new executable file, 27 lines)

@@ -0,0 +1,27 @@
#!/usr/bin/python3
"""
Check if any of the directories provided in the imports.mounts configuration
section are empty.

Returns 0 if all configured mount directories are non-empty, 1 otherwise.
It stops at the first empty directory.
"""

import os
import configuration

cfg = configuration.read()

if cfg and "imports" in cfg and "mounts" in cfg["imports"]:
    mounts = cfg["imports"]["mounts"]
    for item in mounts:
        with os.scandir(item) as contents:
            # An exhausted iterator here means an empty (most likely
            # unmounted) directory.
            if not any(contents):
                exit(1)
else:
    print("No mounts in configuration")

exit(0)

bin/configuration.py

@@ -1,4 +1,5 @@
import os
import pathlib
from glob import glob
from yaml import full_load as _load
@@ -11,6 +12,18 @@ surveys should be under $HOME/etc/surveys/*.yaml. In both cases,
$HOME is the home directory of the user running this script.
"""

def is_relative_to(it, other):
    """
    is_relative_to() is not present before Python 3.9, so we
    need this kludge to get Dougal to run on OpenSUSE 15.4
    """
    if "is_relative_to" in dir(it):
        return it.is_relative_to(other)
    return str(it.absolute()).startswith(str(other.absolute()))

-prefix = os.environ.get("DOUGAL_ROOT", os.environ.get("HOME", ".")+"/software")
+DOUGAL_ROOT = os.environ.get("DOUGAL_ROOT", os.environ.get("HOME", ".")+"/software")
@@ -54,6 +67,10 @@ def files (globspec = None, include_archived = False):
    quickly and temporarily “disabling” a survey configuration by renaming
    the relevant file.
    """
    print("This method is obsolete")
    return

    tuples = []
    if globspec is None:
@@ -87,3 +104,73 @@ def rxflags (flagstr):
    for flag in flagstr:
        flags |= cases.get(flag, 0)
    return flags

def translate_path (file):
    """
    Translate a path from a Dougal import directory to an actual
    physical path on disk.

    Any user files accessible by Dougal must be under a path prefixed
    by `(config.yaml).imports.paths`. The value of `imports.paths` may
    be either a string, in which case this represents the prefix under
    which all Dougal data resides, or a dictionary where the keys are
    logical paths and their values the corresponding physical path.
    """
    cfg = read()
    root = pathlib.Path(DOUGAL_ROOT)
    filepath = pathlib.Path(file).resolve()
    import_paths = cfg["imports"]["paths"]
    if filepath.is_absolute():
        if type(import_paths) == str:
            # Substitute the root for the real physical path
            # NOTE: `root` deals with import_paths not being absolute
            prefix = root.joinpath(pathlib.Path(import_paths)).resolve()
            return str(pathlib.Path(prefix).joinpath(*filepath.parts[2:]))
        else:
            # Look for a match on the second path element
            if filepath.parts[1] in import_paths:
                # NOTE: `root` deals with import_paths[…] not being absolute
                prefix = root.joinpath(import_paths[filepath.parts[1]])
                return str(pathlib.Path(prefix).joinpath(*filepath.parts[2:]))
            else:
                # This path is invalid
                raise TypeError("invalid path or file: {0!r}".format(filepath))
    else:
        # A relative filepath is always resolved relative to the logical root
        root = pathlib.Path("/")
        return translate_path(root.joinpath(filepath))

def untranslate_path (file):
    """
    Attempt to convert a physical path into a logical one.

    See `translate_path()` above for details.
    """
    cfg = read()
    dougal_root = pathlib.Path(DOUGAL_ROOT)
    filepath = pathlib.Path(file).resolve()
    import_paths = cfg["imports"]["paths"]
    physical_root = pathlib.Path("/")
    if filepath.is_absolute():
        if type(import_paths) == str:
            physical_prefix = pathlib.Path(import_paths)
            if is_relative_to(filepath, physical_prefix):
                return str(physical_root.joinpath(filepath.relative_to(physical_prefix)))
            else:
                raise TypeError("invalid path or file: {0!r}".format(filepath))
        else:
            for key, value in import_paths.items():
                value = dougal_root.joinpath(value)
                physical_prefix = pathlib.Path(value)
                if is_relative_to(filepath, physical_prefix):
                    logical_prefix = physical_root.joinpath(pathlib.Path(key)).resolve()
                    return str(logical_prefix.joinpath(filepath.relative_to(physical_prefix)))
            # If we got here with no matches, this is not a valid
            # Dougal data path
            raise TypeError("invalid path or file: {0!r}".format(filepath))
    else:
        # A relative filepath is always resolved relative to DOUGAL_ROOT
        return untranslate_path(dougal_root.joinpath(filepath))


@@ -10,7 +10,7 @@
# be known to the database.
# * PROJECT_NAME is a more descriptive name for human consumption.
# * EPSG_CODE is the EPSG code identifying the CRS for the grid data in the
-#   navigation files, e.g., 32031.
+#   navigation files, e.g., 23031.
#
# In addition to this, certain other parameters may be controlled via
# environment variables:

bin/daily_tasks.py (new executable file, 26 lines)

@@ -0,0 +1,26 @@
#!/usr/bin/python3
"""
Do daily housekeeping on the database.

This is meant to run shortly after midnight every day.
"""

import configuration
from datastore import Datastore

if __name__ == '__main__':
    print("Connecting to database")
    db = Datastore()
    surveys = db.surveys()

    print("Reading surveys")
    for survey in surveys:
        print(f'Survey: {survey["id"]} ({survey["schema"]})')
        db.set_survey(survey["schema"])
        print("Daily tasks")
        db.run_daily_tasks()

    print("Done")

bin/datastore.py

@@ -4,6 +4,7 @@ import psycopg2
import configuration
import preplots
import p111
+from hashlib import md5 # Because it's good enough

"""
Interface to the PostgreSQL database.
@@ -11,13 +12,16 @@ Interface to the PostgreSQL database.
def file_hash(file):
    """
-   Calculate a file hash based on its size, inode, modification and creation times.
+   Calculate a file hash based on its name, size, modification and creation times.

    The hash is used to uniquely identify files in the database and detect if they
    have changed.
    """
+   h = md5()
+   h.update(file.encode())
+   name_digest = h.hexdigest()[:16]
    st = os.stat(file)
-   return ":".join([str(v) for v in [st.st_size, st.st_mtime, st.st_ctime, st.st_ino]])
+   return ":".join([str(v) for v in [st.st_size, st.st_mtime, st.st_ctime, name_digest]])
class Datastore:
    """
@@ -48,7 +52,7 @@ class Datastore:
        self.conn = psycopg2.connect(configuration.read()["db"]["connection_string"], **opts)

-   def set_autocommit(value = True):
+   def set_autocommit(self, value = True):
        """
        Enable or disable autocommit.
@@ -91,7 +95,7 @@ class Datastore:
            cursor.execute(qry, (filepath,))
            results = cursor.fetchall()
            if len(results):
-               return (filepath, file_hash(filepath)) in results
+               return (filepath, file_hash(configuration.translate_path(filepath))) in results
    def add_file(self, path, cursor = None):
@@ -103,7 +107,8 @@ class Datastore:
        else:
            cur = cursor

-       hash = file_hash(path)
+       realpath = configuration.translate_path(path)
+       hash = file_hash(realpath)

        qry = "CALL add_file(%s, %s);"
        cur.execute(qry, (path, hash))

        if cursor is None:
@@ -172,7 +177,7 @@ class Datastore:
        else:
            cur = cursor

-       hash = file_hash(path)
+       hash = file_hash(configuration.translate_path(path))

        qry = """
            UPDATE raw_lines rl
            SET ntbp = %s
@@ -251,6 +256,78 @@ class Datastore:
        self.maybe_commit()

    def save_preplot_line_info(self, lines, filepath, filedata = None):
        """
        Save preplot line information

        Arguments:

        lines (iterable): should be a collection of lines returned from
        one of the line info reading functions (see preplots.py).

        filepath (string): the full path to the preplot file from where the lines
        have been read. It will be added to the survey's `file` table so that
        it can be monitored for changes.
        """
        with self.conn.cursor() as cursor:
            cursor.execute("BEGIN;")
            # Check which preplot lines we actually have already imported,
            # as the line info file may contain extra lines.
            qry = """
                SELECT line, class
                FROM preplot_lines
                ORDER BY line, class;
            """
            cursor.execute(qry)
            preplot_lines = cursor.fetchall()
            hash = self.add_file(filepath, cursor)
            count = 0
            for line in lines:
                count += 1
                if not (line["sail_line"], "V") in preplot_lines:
                    print(f"\u001b[2KSkipping line {count} / {len(lines)}", end="\n", flush=True)
                    continue
                print(f"\u001b[2KSaving line {count} / {len(lines)} ", end="\n", flush=True)
                sail_line = line["sail_line"]
                incr = line.get("incr", True)
                ntba = line.get("ntba", False)
                remarks = line.get("remarks", None)
                meta = json.dumps(line.get("meta", {}))
                source_lines = line.get("source_line", [])
                for source_line in source_lines:
                    qry = """
                        INSERT INTO preplot_saillines AS ps
                            (sailline, line, sailline_class, line_class, incr, ntba, remarks, meta, hash)
                        VALUES
                            (%s, %s, 'V', 'S', %s, %s, %s, %s, %s)
                        ON CONFLICT (sailline, sailline_class, line, line_class, incr) DO UPDATE
                        SET
                            incr = EXCLUDED.incr,
                            ntba = EXCLUDED.ntba,
                            remarks = COALESCE(EXCLUDED.remarks, ps.remarks),
                            meta = ps.meta || EXCLUDED.meta,
                            hash = EXCLUDED.hash;
                    """
                    # NOTE Consider using cursor.executemany() instead. Then again,
                    # we're only expecting a few hundred lines at most.
                    cursor.execute(qry, (sail_line, source_line, incr, ntba, remarks, meta, hash))

            if filedata is not None:
                self.save_file_data(filepath, json.dumps(filedata), cursor)

        self.maybe_commit()
    def save_raw_p190(self, records, fileinfo, filepath, epsg = 0, filedata = None, ntbp = False):
        """
        Save raw P1 data.
@@ -390,9 +467,9 @@ class Datastore:
        with self.conn.cursor() as cursor:
            cursor.execute("BEGIN;")

            hash = self.add_file(filepath, cursor)

            if not records or len(records) == 0:
                print("File has no records (or none have been detected)")
                # We add the file to the database anyway to signal that we have
@@ -406,12 +483,24 @@ class Datastore:
            self.del_hash("*online*", cursor)

            qry = """
-               INSERT INTO raw_lines (sequence, line, remarks, ntbp, incr)
-               VALUES (%s, %s, '', %s, %s)
-               ON CONFLICT DO NOTHING;
+               INSERT INTO raw_lines (sequence, line, remarks, ntbp, incr, meta)
+               VALUES (%s, %s, '', %s, %s, %s)
+               ON CONFLICT (sequence) DO UPDATE SET
+                   line = EXCLUDED.line,
+                   ntbp = EXCLUDED.ntbp,
+                   incr = EXCLUDED.incr,
+                   meta = EXCLUDED.meta;
            """
-           cursor.execute(qry, (fileinfo["sequence"], fileinfo["line"], ntbp, incr))
+           cursor.execute(qry, (fileinfo["sequence"], fileinfo["line"], ntbp, incr, json.dumps(fileinfo["meta"])))

            qry = """
                UPDATE raw_lines
                SET meta = meta || %s
                WHERE sequence = %s;
            """
            cursor.execute(qry, (json.dumps(fileinfo["meta"]), fileinfo["sequence"]))

            qry = """
                INSERT INTO raw_lines_files (sequence, hash)
@@ -444,16 +533,26 @@ class Datastore:
        with self.conn.cursor() as cursor:
            cursor.execute("BEGIN;")

            hash = self.add_file(filepath, cursor)

            qry = """
-               INSERT INTO final_lines (sequence, line, remarks)
-               VALUES (%s, %s, '')
-               ON CONFLICT DO NOTHING;
+               INSERT INTO final_lines (sequence, line, remarks, meta)
+               VALUES (%s, %s, '', %s)
+               ON CONFLICT (sequence) DO UPDATE SET
+                   line = EXCLUDED.line,
+                   meta = EXCLUDED.meta;
            """
-           cursor.execute(qry, (fileinfo["sequence"], fileinfo["line"]))
+           cursor.execute(qry, (fileinfo["sequence"], fileinfo["line"], json.dumps(fileinfo["meta"])))

            qry = """
                UPDATE raw_lines
                SET meta = meta || %s
                WHERE sequence = %s;
            """
            cursor.execute(qry, (json.dumps(fileinfo["meta"]), fileinfo["sequence"]))

            qry = """
                INSERT INTO final_lines_files (sequence, hash)
@@ -480,6 +579,8 @@ class Datastore:
            if filedata is not None:
                self.save_file_data(filepath, json.dumps(filedata), cursor)

            cursor.execute("CALL final_line_post_import(%s);", (fileinfo["sequence"],))

        self.maybe_commit()
    def save_raw_smsrc (self, records, fileinfo, filepath, filedata = None):
@@ -514,7 +615,7 @@ class Datastore:
            qry = """
                UPDATE raw_shots
-               SET meta = jsonb_set(meta, '{smsrc}', %s::jsonb, true)
+               SET meta = jsonb_set(meta, '{smsrc}', %s::jsonb, true) - 'qc'
                WHERE sequence = %s AND point = %s;
            """
@@ -560,7 +661,68 @@ class Datastore:
            # We do not commit if we've been passed a cursor, instead
            # we assume that we are in the middle of a transaction

    def get_file_data(self, path, cursor = None):
        """
        Retrieve arbitrary data associated with a file.
        """
        if cursor is None:
            cur = self.conn.cursor()
        else:
            cur = cursor

        realpath = configuration.translate_path(path)
        hash = file_hash(realpath)

        qry = """
            SELECT data
            FROM file_data
            WHERE hash = %s;
        """
        cur.execute(qry, (hash,))
        res = cur.fetchone()

        if cursor is None:
            self.maybe_commit()
            # We do not commit if we've been passed a cursor, instead
            # we assume that we are in the middle of a transaction

        return res[0]

    def surveys (self, include_archived = False):
        """
        Return list of survey definitions.
        """
        if self.conn is None:
            self.connect()

        if include_archived:
            qry = """
                SELECT meta, schema
                FROM public.projects;
            """
        else:
            qry = """
                SELECT meta, schema
                FROM public.projects
                WHERE NOT (meta->'archived')::boolean IS true
            """

        with self.conn:
            with self.conn.cursor() as cursor:
                cursor.execute(qry)
                results = cursor.fetchall()

        surveys = []
        for r in results:
            if r[0]:
                r[0]['schema'] = r[1]
                surveys.append(r[0])

        return surveys

    # TODO Does this need tweaking on account of #246?
    def apply_survey_configuration(self, cursor = None):
        if cursor is None:
            cur = self.conn.cursor()
@@ -639,3 +801,73 @@ class Datastore:
            self.maybe_commit()
            # We do not commit if we've been passed a cursor, instead
            # we assume that we are in the middle of a transaction

    def del_sequence_final(self, sequence, cursor = None):
        """
        Remove final data for a sequence.
        """
        if cursor is None:
            cur = self.conn.cursor()
        else:
            cur = cursor

        qry = "DELETE FROM files WHERE hash = (SELECT hash FROM final_lines_files WHERE sequence = %s);"
        cur.execute(qry, (sequence,))

        if cursor is None:
            self.maybe_commit()
            # We do not commit if we've been passed a cursor, instead
            # we assume that we are in the middle of a transaction

    def adjust_planner(self, cursor = None):
        """
        Adjust estimated times on the planner
        """
        if cursor is None:
            cur = self.conn.cursor()
        else:
            cur = cursor

        qry = "CALL adjust_planner();"
        cur.execute(qry)

        if cursor is None:
            self.maybe_commit()
            # We do not commit if we've been passed a cursor, instead
            # we assume that we are in the middle of a transaction

    def housekeep_event_log(self, cursor = None):
        """
        Call housekeeping actions on the event log
        """
        if cursor is None:
            cur = self.conn.cursor()
        else:
            cur = cursor

        qry = "CALL augment_event_data();"
        cur.execute(qry)

        qry = "CALL scan_placeholders();"
        cur.execute(qry)

        if cursor is None:
            self.maybe_commit()
            # We do not commit if we've been passed a cursor, instead
            # we assume that we are in the middle of a transaction

    def run_daily_tasks(self, cursor = None):
        """
        Run once-a-day tasks
        """
        if cursor is None:
            cur = self.conn.cursor()
        else:
            cur = cursor

        qry = "CALL log_midnight_shots();"
        cur.execute(qry)

        if cursor is None:
            self.maybe_commit()
            # We do not commit if we've been passed a cursor, instead
            # we assume that we are in the middle of a transaction

bin/delimited.py (new file, 163 lines)

@@ -0,0 +1,163 @@
#!/usr/bin/python3
"""
Delimited record importing functions.
"""
import csv
import builtins
def to_bool (v):
try:
return bool(int(v))
except ValueError:
if type(v) == str:
return v.strip().lower().startswith("t")
return False
transform = {
"int": lambda v: builtins.int(float(v)),
"float": float,
"string": str,
"bool": to_bool
}
def cast_values (row, fields):
def enum_for (key):
field = fields.get(key, {})
def enum (val):
if "enum" in field:
ret_val = field.get("default", val)
enums = field.get("enum", [])
for enum_key in enums:
if enum_key == val:
ret_val = enums[enum_key]
return ret_val
return val
return enum
# Get rid of any unwanted data
if None in row:
del(row[None])
for key in row:
val = row[key]
enum = enum_for(key)
transformer = transform.get(fields.get(key, {}).get("type"), str)
if type(val) == list:
for i, v in enumerate(val):
row[key][i] = transformer(enum(v))
elif type(val) == dict:
continue
else:
row[key] = transformer(enum(val))
return row
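A minimal usage sketch (hypothetical field spec, not part of the module) showing how `cast_values` applies the `transform` table and the enum remapping above:

```python
fields = {
    "point_number": {"type": "int"},
    "ntba": {"type": "bool", "enum": {"Y": "1", "N": "0"}, "default": "0"},
}
row = {"point_number": "1042.0", "ntba": "Y"}
print(cast_values(row, fields))
# -> {'point_number': 1042, 'ntba': True}
```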
def build_fieldnames (spec):
fieldnames = []
if "fields" in spec:
for key in spec["fields"]:
index = spec["fields"][key]["column"]
try:
fieldnames[index] = key
except IndexError:
assert index >= 0
fieldnames.extend(((index + 1) - len(fieldnames)) * [None])
fieldnames[index] = key
return fieldnames
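An illustrative call (made-up columns) showing how gaps between mapped column indices are padded with `None`:

```python
spec = {"fields": {"line_name": {"column": 0}, "easting": {"column": 2}}}
print(build_fieldnames(spec))
# -> ['line_name', None, 'easting']
```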
def from_file_delimited (path, spec):
fieldnames = build_fieldnames(spec)
fields = spec.get("fields", [])
delimiter = spec.get("delimiter", ",")
firstRow = spec.get("firstRow", 0)
headerRow = spec.get("headerRow", False)
if headerRow:
firstRow += 1
records = []
with open(path, "r", errors="ignore") as fd:
if spec.get("type") == "x-sl+csv":
fieldnames = None # Pick from header row
firstRow = 0
reader = csv.DictReader(fd, delimiter=delimiter)
else:
reader = csv.DictReader(fd, fieldnames=fieldnames, delimiter=delimiter)
row = 0
for line in reader:
skip = False
if row < firstRow:
skip = True
if not skip:
records.append(cast_values(dict(line), fields))
row += 1
return records
def remap (line, headers):
row = dict()
for i, key in enumerate(headers):
if "." in key[1:-1]:
# This is an object
k, attr = key.split(".")
if not k in row:
row[k] = dict()
row[k][attr] = line[i]
elif key in row:
if type(row[key]) == list:
row[key].append(line[i])
else:
row[key] = [ row[key], line[i] ]
else:
row[key] = line[i]
return row
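An illustrative call (invented headers) showing the dotted-key nesting and the repeated-header list handling:

```python
headers = ["line", "geom.x", "geom.y", "remark", "remark"]
line = [1001, 500000.0, 6500000.0, "a", "b"]
print(remap(line, headers))
# -> {'line': 1001, 'geom': {'x': 500000.0, 'y': 6500000.0},
#     'remark': ['a', 'b']}
```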
def from_file_saillines (path, spec):
fields = {
"sail_line": { "type": "int" },
"source_line": { "type": "int" },
"incr": { "type": "bool" },
"ntba": { "type": "bool" }
}
# fields = spec.get("fields", sl_fields)
delimiter = spec.get("delimiter", ",")
firstRow = spec.get("firstRow", 0)
records = []
with open(path, "r", errors="ignore") as fd:
row = 0
reader = csv.reader(fd, delimiter=delimiter)
while row < firstRow:
next(reader)
row += 1
headers = [ h.strip() for h in next(reader) if len(h.strip()) ]
for line in reader:
records.append(cast_values(remap(line, headers), fields))
return records
def from_file_p111 (path, spec):
pass
def from_file (path, spec):
if spec.get("type") == "x-sl+csv":
return from_file_saillines(path, spec)
else:
return from_file_delimited(path, spec)

bin/fwr.py (new file, 128 lines)

@@ -0,0 +1,128 @@
#!/usr/bin/python3
"""
Fixed width record importing functions.
"""
import builtins
def to_bool (v):
try:
return bool(int(v))
except ValueError:
if type(v) == str:
return v.strip().lower().startswith("t")
return False
transform = {
"int": lambda v: builtins.int(float(v)),
"float": float,
"string": str,
"str": str,
"bool": to_bool
}
def parse_line (line, fields, fixed = None):
# print("parse_line", line, fields, fixed)
data = dict()
if fixed:
for value in fixed:
start = value["offset"]
end = start + len(value["text"])
text = line[start:end]
if text != value["text"]:
return f"Expected text `{value['text']}` at position {start} but found `{text}` instead."
for key in fields:
spec = fields[key]
transformer = transform[spec.get("type", "str")]
pos_from = spec["offset"]
pos_to = pos_from + spec["length"]
text = line[pos_from:pos_to]
value = transformer(text)
if "enum" in spec:
if "default" in spec:
value = spec["default"]
for enum_key in spec["enum"]:
if enum_key == text:
enum_value = transformer(spec["enum"][enum_key])
value = enum_value
break
data[key] = value
# print("parse_line data =", data)
return data
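A hedged sketch (hypothetical spec) of how `parse_line` slices a fixed-width string, here a file basename carrying sequence and line numbers:

```python
fields = {
    "sequence": {"offset": 0, "length": 4, "type": "int"},
    "line": {"offset": 5, "length": 4, "type": "int"},
}
fixed = [{"offset": 4, "text": "-"}]  # literal separator that must match
print(parse_line("0012-1042.p111", fields, fixed))
# -> {'sequence': 12, 'line': 1042}
```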
specfields = {
"sps1": {
"line_name": { "offset": 1, "length": 16, "type": "int" },
"point_number": { "offset": 17, "length": 8, "type": "int" },
"easting": { "offset": 46, "length": 9, "type": "float" },
"northing": { "offset": 55, "length": 10, "type": "float" }
},
"sps21": {
"line_name": { "offset": 1, "length": 7, "type": "int" },
"point_number": { "offset": 11, "length": 7, "type": "int" },
"easting": { "offset": 46, "length": 9, "type": "float" },
"northing": { "offset": 55, "length": 10, "type": "float" }
},
"p190": {
"line_name": { "offset": 1, "length": 12, "type": "int" },
"point_number": { "offset": 19, "length": 6, "type": "int" },
"easting": { "offset": 46, "length": 9, "type": "float" },
"northing": { "offset": 55, "length": 9, "type": "float" }
},
}
def from_file(path, spec):
# If spec.fields is not present, deduce it from spec.type ("sps1", "sps21", "p190", etc.)
if "fields" in spec:
fields = spec["fields"]
elif "type" in spec and spec["type"] in specfields:
fields = specfields[spec["type"]]
else:
# TODO: Should default to looking for spec.format and doing a legacy import on it
return "Neither 'type' nor 'fields' given. I don't know how to import this fixed-width dataset."
firstRow = spec.get("firstRow", 0)
skipStart = [] # Skip lines starting with any of these values
skipMatch = [] # Skip lines matching any of these values
if "type" in spec:
if spec["type"] == "sps1" or spec["type"] == "sps21" or spec["type"] == "p190":
skipStart = "H"
skipMatch = "EOF"
records = []
with open(path, "r", errors="ignore") as fd:
row = 0
line = fd.readline()
while line:
skip = False
if row < firstRow:
skip = True
if not skip:
for v in skipStart:
if line.startswith(v):
skip = True
break
for v in skipMatch:
if line.strip() == v:
skip = True
break
if not skip:
records.append(parse_line(line, fields))
row += 1
line = fd.readline()
return records
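A hedged usage sketch (hypothetical path): importing an SPS rev. 1 file, with fields deduced from the built-in "sps1" spec and header ("H") lines skipped:

```python
records = from_file("/srv/mnt/Data/preplots/lines.s01", {"type": "sps1"})
if isinstance(records, str):
    print("Import failed:", records)  # from_file returns an error string
else:
    for rec in records[:3]:
        print(rec["line_name"], rec["point_number"],
              rec["easting"], rec["northing"])
```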

bin/housekeep_database.py (new executable file, 26 lines)

@@ -0,0 +1,26 @@
#!/usr/bin/python3
"""
Do housekeeping actions on the database.
"""
import configuration
from datastore import Datastore
if __name__ == '__main__':
print("Connecting to database")
db = Datastore()
surveys = db.surveys()
print("Reading surveys")
for survey in surveys:
print(f'Survey: {survey["id"]} ({survey["schema"]})')
db.set_survey(survey["schema"])
print("Planner adjustment")
db.adjust_planner()
print("Event log housekeeping")
db.housekeep_event_log()
print("Done")

View File

@@ -59,7 +59,7 @@ def qc_data (cursor, prefix):
else:
print("No QC data found");
return
#print("QC", qc)
index = 0
for item in qc["results"]:

View File

@@ -39,7 +39,7 @@ def seis_data (survey):
if not pathlib.Path(pathPrefix).exists():
print(pathPrefix)
raise ValueError("Export path does not exist")
print(f"Requesting sequences for {survey['id']}")
url = f"http://localhost:3000/api/project/{survey['id']}/sequence"
r = requests.get(url)
@@ -47,12 +47,12 @@ def seis_data (survey):
for sequence in r.json():
if sequence['status'] not in ["final", "ntbp"]:
continue
filename = pathlib.Path(pathPrefix, "sequence{:0>3d}.json".format(sequence['sequence']))
if filename.exists():
print(f"Skipping export for sequence {sequence['sequence']} file already exists")
continue
print(f"Processing sequence {sequence['sequence']}")
url = f"http://localhost:3000/api/project/{survey['id']}/event?sequence={sequence['sequence']}&missing=t"
headers = { "Accept": "application/vnd.seis+json" }

View File

@@ -15,17 +15,48 @@ import re
import time
import configuration
import p111
import fwr
from datastore import Datastore
def add_pending_remark(db, sequence):
text = '<!-- @@DGL:PENDING@@ --><h4 style="color:red;cursor:help;" title="Edit the sequence file or directory name to import final data">Marked as <code>PENDING</code>.</h4><!-- @@/DGL:PENDING@@ -->\n'
with db.conn.cursor() as cursor:
qry = "SELECT remarks FROM raw_lines WHERE sequence = %s;"
cursor.execute(qry, (sequence,))
remarks = cursor.fetchone()[0]
rx = re.compile("^(<!-- @@DGL:PENDING@@ -->.*<!-- @@/DGL:PENDING@@ -->\n)")
m = rx.match(remarks)
if m is None:
remarks = text + remarks
qry = "UPDATE raw_lines SET remarks = %s WHERE sequence = %s;"
cursor.execute(qry, (remarks, sequence))
db.maybe_commit()
def del_pending_remark(db, sequence):
with db.conn.cursor() as cursor:
qry = "SELECT remarks FROM raw_lines WHERE sequence = %s;"
cursor.execute(qry, (sequence,))
row = cursor.fetchone()
if row is not None:
remarks = row[0]
rx = re.compile("^(<!-- @@DGL:PENDING@@ -->.*<!-- @@/DGL:PENDING@@ -->\n)")
m = rx.match(remarks)
if m is not None:
remarks = rx.sub("", remarks)
qry = "UPDATE raw_lines SET remarks = %s WHERE sequence = %s;"
cursor.execute(qry, (remarks, sequence))
db.maybe_commit()
if __name__ == '__main__':
print("Reading configuration")
surveys = configuration.surveys()
file_min_age = configuration.read().get('imports', {}).get('file_min_age', 10)
print("Connecting to database")
db = Datastore()
db.connect()
surveys = db.surveys()
print("Reading surveys")
for survey in surveys:
@@ -39,44 +70,100 @@ if __name__ == '__main__':
print("No final P1/11 configuration")
exit(0)
pattern = final_p111["pattern"]
rx = re.compile(pattern["regex"])
lineNameInfo = final_p111.get("lineNameInfo")
pattern = final_p111.get("pattern")
if not lineNameInfo:
if not pattern:
print("ERROR! Missing final.p111.lineNameInfo in project configuration. Cannot import final P111")
raise Exception("Missing final.p111.lineNameInfo")
else:
print("WARNING! No `lineNameInfo` in project configuration (final.p111). You should add it to the settings.")
rx = None
if pattern and pattern.get("regex"):
rx = re.compile(pattern["regex"])
if "pending" in survey["final"]:
pendingRx = re.compile(survey["final"]["pending"]["pattern"]["regex"])
for fileprefix in final_p111["paths"]:
print(f"Path prefix: {fileprefix}")
realprefix = configuration.translate_path(fileprefix)
print(f"Path prefix: {fileprefix}{realprefix}")
for globspec in final_p111["globs"]:
for filepath in pathlib.Path(fileprefix).glob(globspec):
filepath = str(filepath)
print(f"Found {filepath}")
for physical_filepath in pathlib.Path(realprefix).glob(globspec):
physical_filepath = str(physical_filepath)
logical_filepath = configuration.untranslate_path(physical_filepath)
print(f"Found {logical_filepath}")
if not db.file_in_db(filepath):
age = time.time() - os.path.getmtime(filepath)
pending = False
if pendingRx:
pending = pendingRx.search(physical_filepath) is not None
if not db.file_in_db(logical_filepath):
age = time.time() - os.path.getmtime(physical_filepath)
if age < file_min_age:
print("Skipping file because too new", filepath)
print("Skipping file because too new", logical_filepath)
continue
print("Importing")
match = rx.match(os.path.basename(filepath))
if not match:
error_message = f"File path not match the expected format! ({filepath} ~ {pattern['regex']})"
print(error_message, file=sys.stderr)
print("This file will be ignored!")
if rx:
match = rx.match(os.path.basename(logical_filepath))
if not match:
error_message = f"File path not match the expected format! ({logical_filepath} ~ {pattern['regex']})"
print(error_message, file=sys.stderr)
print("This file will be ignored!")
continue
file_info = dict(zip(pattern["captures"], match.groups()))
file_info["meta"] = {}
if lineNameInfo:
basename = os.path.basename(physical_filepath)
fields = lineNameInfo.get("fields", {})
fixed = lineNameInfo.get("fixed")
try:
parsed_line = fwr.parse_line(basename, fields, fixed)
except ValueError as err:
parsed_line = "Line format error: " + str(err)
if type(parsed_line) == str:
print(parsed_line, file=sys.stderr)
print("This file will be ignored!")
continue
file_info = {}
file_info["sequence"] = parsed_line["sequence"]
file_info["line"] = parsed_line["line"]
del(parsed_line["sequence"])
del(parsed_line["line"])
file_info["meta"] = {
"fileInfo": parsed_line
}
if pending:
print("Skipping / removing final file because marked as PENDING", logical_filepath)
db.del_sequence_final(file_info["sequence"])
add_pending_remark(db, file_info["sequence"])
continue
else:
del_pending_remark(db, file_info["sequence"])
file_info = dict(zip(pattern["captures"], match.groups()))
p111_data = p111.from_file(filepath)
p111_data = p111.from_file(physical_filepath)
print("Saving")
p111_records = p111.p111_type("S", p111_data)
file_info["meta"]["lineName"] = p111.line_name(p111_data)
db.save_final_p111(p111_records, file_info, filepath, survey["epsg"])
db.save_final_p111(p111_records, file_info, logical_filepath, survey["epsg"])
else:
print("Already in DB")
if pending:
print("Removing from database because marked as PENDING")
db.del_sequence_final(file_info["sequence"])
add_pending_remark(db, file_info["sequence"])
print("Done")

View File

@@ -51,12 +51,12 @@ if __name__ == '__main__':
print(f"Found {filepath}")
if not db.file_in_db(filepath):
age = time.time() - os.path.getmtime(filepath)
if age < file_min_age:
print("Skipping file because too new", filepath)
continue
print("Importing")
match = rx.match(os.path.basename(filepath))

bin/import_map_layers.py (new executable file, 127 lines)

@@ -0,0 +1,127 @@
#!/usr/bin/python3
"""
Import map layer data.
For each survey, check for new or modified map layer files
and (re-)import them into the database.
"""
import os
import sys
import pathlib
import re
import time
import json
import configuration
from datastore import Datastore
if __name__ == '__main__':
"""
Imports map layers from the directories defined in the configuration object
`import.map.layers`. The content of that key is an object with the following
structure:
{
layer1Name: [
  {
    format: "geojson",
    path: "",        // Logical path to a file or directory
    globs: [
      "**/*.geojson" // List of globs matching map data files
    ]
  },
  …
],
layer2Name: …
}
"""
def process (layer_name, layer, physical_filepath):
physical_filepath = str(physical_filepath)
logical_filepath = configuration.untranslate_path(physical_filepath)
print(f"Found {logical_filepath}")
if not db.file_in_db(logical_filepath):
age = time.time() - os.path.getmtime(physical_filepath)
if age < file_min_age:
print("Skipping file because too new", logical_filepath)
return
print("Importing")
file_info = {
"type": "map_layer",
"format": layer["format"],
"name": layer_name,
"tooltip": layer.get("tooltip"),
"popup": layer.get("popup")
}
db.save_file_data(logical_filepath, json.dumps(file_info))
else:
file_info = db.get_file_data(logical_filepath)
dirty = False
if file_info:
if file_info["name"] != layer_name:
print("Renaming to", layer_name)
file_info["name"] = layer_name
dirty = True
if file_info.get("tooltip") != layer.get("tooltip"):
print("Changing tooltip to", layer.get("tooltip") or "null")
file_info["tooltip"] = layer.get("tooltip")
dirty = True
if file_info.get("popup") != layer.get("popup"):
print("Changing popup to", layer.get("popup") or "null")
file_info["popup"] = layer.get("popup")
dirty = True
if dirty:
db.save_file_data(logical_filepath, json.dumps(file_info))
else:
print("Already in DB")
print("Reading configuration")
file_min_age = configuration.read().get('imports', {}).get('file_min_age', 10)
print("Connecting to database")
db = Datastore()
surveys = db.surveys()
print("Reading surveys")
for survey in surveys:
print(f'Survey: {survey["id"]} ({survey["schema"]})')
db.set_survey(survey["schema"])
try:
map_layers = survey["imports"]["map"]["layers"]
except KeyError:
print("No map layers defined")
continue
for layer_name, layer_items in map_layers.items():
for layer in layer_items:
fileprefix = layer["path"]
realprefix = configuration.translate_path(fileprefix)
if os.path.isfile(realprefix):
process(layer_name, layer, realprefix)
elif os.path.isdir(realprefix):
if not "globs" in layer:
layer["globs"] = [ "**/*.geojson" ]
for globspec in layer["globs"]:
for physical_filepath in pathlib.Path(realprefix).glob(globspec):
process(layer_name, layer, physical_filepath)
print("Done")

View File

@@ -15,38 +15,52 @@ import configuration
import preplots
from datastore import Datastore
if __name__ == '__main__':
def preplots_sorter (preplot):
rank = {
"x-sl+csv": 10
}
return rank.get(preplot.get("type"), 0)
print("Reading configuration")
surveys = configuration.surveys()
file_min_age = configuration.read().get('imports', {}).get('file_min_age', 10)
if __name__ == '__main__':
print("Connecting to database")
db = Datastore()
surveys = db.surveys()
print("Reading configuration")
file_min_age = configuration.read().get('imports', {}).get('file_min_age', 10)
print("Reading surveys")
for survey in surveys:
print(f'Survey: {survey["id"]} ({survey["schema"]})')
db.set_survey(survey["schema"])
for file in survey["preplots"]:
# We sort the preplots so that ancillary line info always comes last,
# after the actual line + point data has been imported
for file in sorted(survey["preplots"], key=preplots_sorter):
realpath = configuration.translate_path(file["path"])
print(f"Preplot: {file['path']}")
if not db.file_in_db(file["path"]):
age = time.time() - os.path.getmtime(file["path"])
age = time.time() - os.path.getmtime(realpath)
if age < file_min_age:
print("Skipping file because too new", file["path"])
continue
print("Importing")
try:
preplot = preplots.from_file(file)
preplot = preplots.from_file(file, realpath)
except FileNotFoundError:
print(f"File does not exist: {file['path']}", file=sys.stderr)
continue
if type(preplot) is list:
print("Saving to DB")
db.save_preplots(preplot, file["path"], file["class"], survey["epsg"], file)
if file.get("type") == "x-sl+csv":
db.save_preplot_line_info(preplot, file["path"], file)
else:
db.save_preplots(preplot, file["path"], file["class"], survey["epsg"], file)
elif type(preplot) is str:
print(preplot)
else:

View File

@@ -15,17 +15,17 @@ import re
import time
import configuration
import p111
import fwr
from datastore import Datastore
if __name__ == '__main__':
print("Reading configuration")
surveys = configuration.surveys()
file_min_age = configuration.read().get('imports', {}).get('file_min_age', 10)
print("Connecting to database")
db = Datastore()
db.connect()
surveys = db.surveys()
print("Reading surveys")
for survey in surveys:
@@ -39,56 +39,95 @@ if __name__ == '__main__':
print("No raw P1/11 configuration")
exit(0)
pattern = raw_p111["pattern"]
rx = re.compile(pattern["regex"])
lineNameInfo = raw_p111.get("lineNameInfo")
pattern = raw_p111.get("pattern")
if not lineNameInfo:
if not pattern:
print("ERROR! Missing raw.p111.lineNameInfo in project configuration. Cannot import raw P111")
raise Exception("Missing raw.p111.lineNameInfo")
else:
print("WARNING! No `lineNameInfo` in project configuration (raw.p111). You should add it to the settings.")
rx = None
if pattern and pattern.get("regex"):
rx = re.compile(pattern["regex"])
if "ntbp" in survey["raw"]:
ntbpRx = re.compile(survey["raw"]["ntbp"]["pattern"]["regex"])
for fileprefix in raw_p111["paths"]:
print(f"Path prefix: {fileprefix}")
realprefix = configuration.translate_path(fileprefix)
print(f"Path prefix: {fileprefix}{realprefix}")
for globspec in raw_p111["globs"]:
for filepath in pathlib.Path(fileprefix).glob(globspec):
filepath = str(filepath)
print(f"Found {filepath}")
for physical_filepath in pathlib.Path(realprefix).glob(globspec):
physical_filepath = str(physical_filepath)
logical_filepath = configuration.untranslate_path(physical_filepath)
print(f"Found {logical_filepath}")
if ntbpRx:
ntbp = ntbpRx.search(filepath) is not None
ntbp = ntbpRx.search(physical_filepath) is not None
else:
ntbp = False
if not db.file_in_db(filepath):
age = time.time() - os.path.getmtime(filepath)
if not db.file_in_db(logical_filepath):
age = time.time() - os.path.getmtime(physical_filepath)
if age < file_min_age:
print("Skipping file because too new", filepath)
print("Skipping file because too new", logical_filepath)
continue
print("Importing")
match = rx.match(os.path.basename(filepath))
if not match:
error_message = f"File path not match the expected format! ({filepath} ~ {pattern['regex']})"
print(error_message, file=sys.stderr)
print("This file will be ignored!")
continue
if rx:
match = rx.match(os.path.basename(logical_filepath))
if not match:
error_message = f"File path not matching the expected format! ({logical_filepath} ~ {pattern['regex']})"
print(error_message, file=sys.stderr)
print("This file will be ignored!")
continue
file_info = dict(zip(pattern["captures"], match.groups()))
file_info = dict(zip(pattern["captures"], match.groups()))
file_info["meta"] = {}
p111_data = p111.from_file(filepath)
if lineNameInfo:
basename = os.path.basename(physical_filepath)
fields = lineNameInfo.get("fields", {})
fixed = lineNameInfo.get("fixed")
try:
parsed_line = fwr.parse_line(basename, fields, fixed)
except ValueError as err:
parsed_line = "Line format error: " + str(err)
if type(parsed_line) == str:
print(parsed_line, file=sys.stderr)
print("This file will be ignored!")
continue
file_info = {}
file_info["sequence"] = parsed_line["sequence"]
file_info["line"] = parsed_line["line"]
del(parsed_line["sequence"])
del(parsed_line["line"])
file_info["meta"] = {
"fileInfo": parsed_line
}
p111_data = p111.from_file(physical_filepath)
print("Saving")
p111_records = p111.p111_type("S", p111_data)
if len(p111_records):
file_info["meta"]["lineName"] = p111.line_name(p111_data)
db.save_raw_p111(p111_records, file_info, filepath, survey["epsg"], ntbp=ntbp)
db.save_raw_p111(p111_records, file_info, logical_filepath, survey["epsg"], ntbp=ntbp)
else:
print("No source records found in file")
else:
print("Already in DB")
# Update the NTBP status to whatever the latest is,
# as it might have changed.
db.set_ntbp(filepath, ntbp)
db.set_ntbp(logical_filepath, ntbp)
if ntbp:
print("Sequence is NTBP")

View File

@@ -54,12 +54,12 @@ if __name__ == '__main__':
print(f"Found {filepath}")
if not db.file_in_db(filepath):
age = time.time() - os.path.getmtime(filepath)
if age < file_min_age:
print("Skipping file because too new", filepath)
continue
print("Importing")
match = rx.match(os.path.basename(filepath))

View File

@@ -15,17 +15,17 @@ import re
import time
import configuration
import smsrc
import fwr
from datastore import Datastore
if __name__ == '__main__':
print("Reading configuration")
surveys = configuration.surveys()
file_min_age = configuration.read().get('imports', {}).get('file_min_age', 10)
print("Connecting to database")
db = Datastore()
db.connect()
surveys = db.surveys()
print("Reading surveys")
for survey in surveys:
@@ -34,49 +34,80 @@ if __name__ == '__main__':
db.set_survey(survey["schema"])
try:
raw_smsrc = survey["raw"]["smsrc"]
raw_smsrc = survey["raw"]["source"]["smsrc"]["header"]
except KeyError:
print("No SmartSource data configuration")
continue
flags = 0
if "flags" in raw_smsrc:
configuration.rxflags(raw_smsrc["flags"])
# NOTE I've no idea what this is 🤔
# flags = 0
# if "flags" in raw_smsrc:
# configuration.rxflags(raw_smsrc["flags"])
pattern = raw_smsrc["pattern"]
rx = re.compile(pattern["regex"], flags)
lineNameInfo = raw_smsrc.get("lineNameInfo")
pattern = raw_smsrc.get("pattern")
rx = None
if pattern and pattern.get("regex"):
rx = re.compile(pattern["regex"])
for fileprefix in raw_smsrc["paths"]:
print(f"Path prefix: {fileprefix}")
realprefix = configuration.translate_path(fileprefix)
print(f"Path prefix: {fileprefix}{realprefix}")
for globspec in raw_smsrc["globs"]:
for filepath in pathlib.Path(fileprefix).glob(globspec):
filepath = str(filepath)
print(f"Found {filepath}")
for physical_filepath in pathlib.Path(realprefix).glob(globspec):
physical_filepath = str(physical_filepath)
logical_filepath = configuration.untranslate_path(physical_filepath)
print(f"Found {logical_filepath}")
if not db.file_in_db(filepath):
age = time.time() - os.path.getmtime(filepath)
if not db.file_in_db(logical_filepath):
age = time.time() - os.path.getmtime(physical_filepath)
if age < file_min_age:
print("Skipping file because too new", filepath)
print("Skipping file because too new", logical_filepath)
continue
print("Importing")
match = rx.match(os.path.basename(filepath))
if not match:
error_message = f"File path not matching the expected format! ({filepath} ~ {pattern['regex']})"
print(error_message, file=sys.stderr)
print("This file will be ignored!")
continue
if rx:
match = rx.match(os.path.basename(logical_filepath))
if not match:
error_message = f"File path not matching the expected format! ({logical_filepath} ~ {pattern['regex']})"
print(error_message, file=sys.stderr)
print("This file will be ignored!")
continue
file_info = dict(zip(pattern["captures"], match.groups()))
file_info = dict(zip(pattern["captures"], match.groups()))
file_info["meta"] = {}
smsrc_records = smsrc.from_file(filepath)
if lineNameInfo:
basename = os.path.basename(physical_filepath)
fields = lineNameInfo.get("fields", {})
fixed = lineNameInfo.get("fixed")
try:
parsed_line = fwr.parse_line(basename, fields, fixed)
except ValueError as err:
parsed_line = "Line format error: " + str(err)
if type(parsed_line) == str:
print(parsed_line, file=sys.stderr)
print("This file will be ignored!")
continue
file_info = {}
file_info["sequence"] = parsed_line["sequence"]
file_info["line"] = parsed_line["line"]
del(parsed_line["sequence"])
del(parsed_line["line"])
file_info["meta"] = {
"fileInfo": parsed_line
}
smsrc_records = smsrc.from_file(physical_filepath)
print("Saving")
db.save_raw_smsrc(smsrc_records, file_info, filepath)
db.save_raw_smsrc(smsrc_records, file_info, logical_filepath)
else:
print("Already in DB")

View File

@@ -15,25 +15,4 @@ from datastore import Datastore
if __name__ == '__main__':
print("Reading configuration")
configs = configuration.files(include_archived = True)
print("Connecting to database")
db = Datastore()
#db.connect()
print("Reading surveys")
for config in configs:
filepath = config[0]
survey = config[1]
print(f'Survey: {survey["id"]} ({filepath})')
db.set_survey(survey["schema"])
if not db.file_in_db(filepath):
print("Saving to DB")
db.save_file_data(filepath, json.dumps(survey))
print("Applying survey configuration")
db.apply_survey_configuration()
else:
print("Already in DB")
print("Done")
print("This function is obsolete. Returning with no action")

View File

@@ -14,7 +14,7 @@ def detect_schema (conn):
if __name__ == '__main__':
import argparse
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--schema", required=False, default=None, help="survey where to insert the event")
ap.add_argument("-t", "--tstamp", required=False, default=None, help="event timestamp")
@@ -30,19 +30,19 @@ if __name__ == '__main__':
schema = args["schema"]
else:
schema = detect_schema(db.conn)
if args["tstamp"]:
tstamp = args["tstamp"]
else:
tstamp = datetime.utcnow().isoformat()
message = " ".join(args["remarks"])
print("new event:", schema, tstamp, message)
print("new event:", schema, tstamp, message, args["label"])
if schema and tstamp and message:
db.set_survey(schema)
with db.conn.cursor() as cursor:
qry = "INSERT INTO events_timed (tstamp, remarks) VALUES (%s, %s);"
cursor.execute(qry, (tstamp, message))
qry = "INSERT INTO event_log (tstamp, remarks, labels) VALUES (%s, replace_placeholders(%s, %s, NULL, NULL), %s);"
cursor.execute(qry, (tstamp, message, tstamp, args["label"]))
db.maybe_commit()

View File

@@ -7,7 +7,6 @@ P1/11 parsing functions.
import math
import re
from datetime import datetime, timedelta, timezone
from parse_fwr import parse_fwr
def _int (string):
return int(float(string))
@@ -153,6 +152,9 @@ def parse_line (string):
return None
def line_name(records):
return set([ r['Acquisition Line Name'] for r in p111_type("S", records) ]).pop()
def p111_type(type, records):
return [ r for r in records if r["type"] == type ]

View File

@@ -12,7 +12,7 @@ from parse_fwr import parse_fwr
def parse_p190_header (string):
"""Parse a generic P1/90 header record.
Returns a dictionary of fields.
"""
names = [ "record_type", "header_type", "header_type_modifier", "description", "data" ]
@@ -27,7 +27,7 @@ def parse_p190_type1 (string):
"doy", "time", "spare2" ]
record = parse_fwr(string, [1, 12, 3, 1, 1, 1, 6, 10, 11, 9, 9, 6, 3, 6, 1])
return dict(zip(names, record))
def parse_p190_rcv_group (string):
"""Parse a P1/90 Type 1 receiver group record."""
names = [ "record_type",
@@ -37,7 +37,7 @@ def parse_p190_rcv_group (string):
"streamer_id" ]
record = parse_fwr(string, [1, 4, 9, 9, 4, 4, 9, 9, 4, 4, 9, 9, 4, 1])
return dict(zip(names, record))
def parse_line (string):
type = string[0]
if string[:3] == "EOF":
@@ -52,7 +52,7 @@ def parse_line (string):
def p190_type(type, records):
return [ r for r in records if r["record_type"] == type ]
def p190_header(code, records):
return [ h for h in p190_type("H", records) if h["header_type"]+h["header_type_modifier"] == code ]
@@ -86,15 +86,15 @@ def normalise_record(record):
# These are probably strings
elif "strip" in dir(record[key]):
record[key] = record[key].strip()
return record
def normalise(records):
for record in records:
normalise_record(record)
return records
def from_file(path, only_records=None, shot_range=None, with_objrefs=False):
records = []
with open(path) as fd:
@@ -102,10 +102,10 @@ def from_file(path, only_records=None, shot_range=None, with_objrefs=False):
line = fd.readline()
while line:
cnt = cnt + 1
if line == "EOF":
break
record = parse_line(line)
if record is not None:
if only_records:
@@ -121,9 +121,9 @@ def from_file(path, only_records=None, shot_range=None, with_objrefs=False):
records.append(record)
line = fd.readline()
return records
def apply_tstamps(recordset, tstamp=None, fix_bad_seconds=False):
#print("tstamp", tstamp, type(tstamp))
if type(tstamp) is int:
@@ -161,16 +161,16 @@ def apply_tstamps(recordset, tstamp=None, fix_bad_seconds=False):
record["tstamp"] = ts
prev[object_id(record)] = doy
break
return recordset
def dms(value):
# 591544.61N
hemisphere = 1 if value[-1] in "NnEe" else -1
seconds = float(value[-6:-1])
minutes = int(value[-8:-6])
degrees = int(value[:-8])
return (degrees + minutes/60 + seconds/3600) * hemisphere
def tod(record):
@@ -183,7 +183,7 @@ def tod(record):
m = int(time[2:4])
s = float(time[4:])
return d*86400 + h*3600 + m*60 + s
def duration(record0, record1):
ts0 = tod(record0)
ts1 = tod(record1)
@@ -198,10 +198,10 @@ def azimuth(record0, record1):
x0, y0 = float(record0["easting"]), float(record0["northing"])
x1, y1 = float(record1["easting"]), float(record1["northing"])
return math.degrees(math.atan2(x1-x0, y1-y0)) % 360
def speed(record0, record1, knots=False):
scale = 3600/1852 if knots else 1
t0 = tod(record0)
t1 = tod(record1)
return (distance(record0, record1) / math.fabs(t1-t0)) * scale

View File

@@ -1,21 +0,0 @@
#!/usr/bin/python3
def parse_fwr (string, widths, start=0):
"""Parse a fixed-width record.
string: the string to parse.
widths: a list of record widths. A negative width denotes a field to be skipped.
start: optional start index.
Returns a list of strings.
"""
results = []
current_index = start
for width in widths:
if width > 0:
results.append(string[current_index : current_index + width])
current_index += width
else:
current_index -= width
return results

View File

@@ -1,14 +1,51 @@
import sps
import fwr
import delimited
"""
Preplot importing functions.
"""
def from_file (file):
if not "type" in file or file["type"] == "sps":
records = sps.from_file(file["path"], file["format"] if "format" in file else None )
def is_fixed_width (file):
fixed_width_types = [ "sps1", "sps21", "p190", "fixed-width" ]
return type(file) == dict and "type" in file and file["type"] in fixed_width_types
def is_delimited (file):
delimited_types = [ "csv", "p111", "x-sl+csv" ]
return type(file) == dict and "type" in file and file["type"] in delimited_types
def from_file (file, realpath = None):
"""
Return a list of dicts, where each dict has the structure:
{
"line_name": <int>,
"points": [
{
"line_name": <int>,
"point_number": <int>,
"easting": <float>,
"northing": <float>
},
]
}
On error, return a string describing the error condition.
"""
filepath = realpath or file["path"]
if is_fixed_width(file):
records = fwr.from_file(filepath, file)
elif is_delimited(file):
records = delimited.from_file(filepath, file)
else:
return "Not an SPS file"
return "Unrecognised file format"
if type(records) == str:
# This is an error message
return records
if file.get("type") == "x-sl+csv":
return records
lines = []
line_names = set([r["line_name"] for r in records])

View File

@@ -13,21 +13,27 @@ from datastore import Datastore
if __name__ == '__main__':
print("Reading configuration")
surveys = configuration.surveys()
print("Connecting to database")
db = Datastore()
print("Reading configuration")
surveys = db.surveys()
print("Reading surveys")
for survey in surveys:
print(f'Survey: {survey["id"]} ({survey["schema"]})')
db.set_survey(survey["schema"])
for file in db.list_files():
path = file[0]
if not os.path.exists(path):
print(path, "NOT FOUND")
db.del_file(path)
try:
path = configuration.translate_path(file[0])
if not os.path.exists(path):
print(path, "NOT FOUND")
db.del_file(file[0])
except TypeError:
# In case the logical path no longer matches
# the Dougal configuration.
print(file[0], "COULD NOT BE TRANSLATED TO A PHYSICAL PATH. DELETING")
db.del_file(file[0])
print("Done")

View File

@@ -1,5 +1,6 @@
#!/bin/bash
DOUGAL_ROOT=${DOUGAL_ROOT:-$(dirname "$0")/..}
BINDIR="$DOUGAL_ROOT/bin"
@@ -8,6 +9,20 @@ LOCKFILE=${LOCKFILE:-$VARDIR/runner.lock}
[ -f ~/.profile ] && . ~/.profile
DOUGAL_LOG_TAG="dougal.runner[$$]"
# Only send output to the logger if we have the appropriate
# configuration set.
if [[ -n "$DOUGAL_LOG_TAG" && -n "$DOUGAL_LOG_FACILITY" ]]; then
function _logger () {
logger $*
}
else
function _logger () {
: # This is the Bash null command
}
fi
function tstamp () {
date -u +%Y-%m-%dT%H:%M:%SZ
}
@@ -18,26 +33,44 @@ function prefix () {
function print_log () {
printf "$(prefix)\033[36m%s\033[0m\n" "$*"
_logger -t "$DOUGAL_LOG_TAG" -p "$DOUGAL_LOG_FACILITY.info" "$*"
}
function print_info () {
printf "$(prefix)\033[0m%s\n" "$*"
_logger -t "$DOUGAL_LOG_TAG" -p "$DOUGAL_LOG_FACILITY.debug" "$*"
}
function print_warning () {
printf "$(prefix)\033[33;1m%s\033[0m\n" "$*"
_logger -t "$DOUGAL_LOG_TAG" -p "$DOUGAL_LOG_FACILITY.warning" "$*"
}
function print_error () {
printf "$(prefix)\033[31m%s\033[0m\n" "$*"
_logger -t "$DOUGAL_LOG_TAG" -p "$DOUGAL_LOG_FACILITY.error" "$*"
}
function run () {
PROGNAME=$(basename "$1")
PROGNAME=${PROGNAME:-$(basename "$1")}
STDOUTLOG="$VARDIR/$PROGNAME.out"
STDERRLOG="$VARDIR/$PROGNAME.err"
"$1" >"$STDOUTLOG" 2>"$STDERRLOG" || {
# What follows runs the command that we have been given (with any arguments passed)
# and logs:
# * stdout to $STDOUTLOG (a temporary file) and possibly to syslog, if enabled.
# * stderr to $STDERRLOG (a temporary file) and possibly to syslog, if enabled.
#
# When logging to syslog, stdout goes as debug level and stderr as warning (not error)
#
# The temporary file is used in case the command fails, at which point we try to log
# a warning in GitLab's alerts facility.
$* \
> >(tee $STDOUTLOG |_logger -t "dougal.runner.$PROGNAME[$$]" -p "$DOUGAL_LOG_FACILITY.debug") \
2> >(tee $STDERRLOG |_logger -t "dougal.runner.$PROGNAME[$$]" -p "$DOUGAL_LOG_FACILITY.warning") || {
print_error "Failed: $PROGNAME"
cat $STDOUTLOG
cat $STDERRLOG
@@ -52,11 +85,17 @@ function run () {
exit 2
}
# cat $STDOUTLOG
unset PROGNAME
rm $STDOUTLOG $STDERRLOG
}
function cleanup () {
if [[ -f $LOCKFILE ]]; then
rm "$LOCKFILE"
fi
}
if [[ -f $LOCKFILE ]]; then
PID=$(cat "$LOCKFILE")
if pgrep -F "$LOCKFILE"; then
@@ -74,6 +113,13 @@ echo "$$" > "$LOCKFILE" || {
}
print_info "Start run"
print_log "Check if data is accessible"
$BINDIR/check_mounts_present.py || {
print_warning "Import mounts not accessible. Inhibiting all tasks!"
cleanup
exit 253
}
print_log "Purge deleted files"
run $BINDIR/purge_deleted_files.py
@@ -86,33 +132,47 @@ run $BINDIR/import_preplots.py
print_log "Import raw P1/11"
run $BINDIR/import_raw_p111.py
print_log "Import raw P1/90"
run $BINDIR/import_raw_p190.py
#print_log "Import raw P1/90"
#run $BINDIR/import_raw_p190.py
print_log "Import final P1/11"
run $BINDIR/import_final_p111.py
print_log "Import final P1/90"
run $BINDIR/import_final_p190.py
#print_log "Import final P1/90"
#run $BINDIR/import_final_p190.py
print_log "Import SmartSource data"
run $BINDIR/import_smsrc.py
if [[ -z "$RUNNER_NOEXPORT" ]]; then
print_log "Export system data"
run $BINDIR/system_exports.py
fi
print_log "Import map user layers"
run $BINDIR/import_map_layers.py
if [[ -n "$RUNNER_IMPORT" ]]; then
print_log "Import system data"
run $BINDIR/system_imports.py
fi
# if [[ -z "$RUNNER_NOEXPORT" ]]; then
# print_log "Export system data"
# run $BINDIR/system_exports.py
# fi
print_log "Export QC data"
run $BINDIR/human_exports_qc.py
# if [[ -n "$RUNNER_IMPORT" ]]; then
# print_log "Import system data"
# run $BINDIR/system_imports.py
# fi
print_log "Export sequence data"
run $BINDIR/human_exports_seis.py
# print_log "Export QC data"
# run $BINDIR/human_exports_qc.py
# print_log "Export sequence data"
# run $BINDIR/human_exports_seis.py
print_log "Process ASAQC queue"
# Run insecure in test mode:
# export NODE_TLS_REJECT_UNAUTHORIZED=0
PROGNAME=asaqc_queue run $DOUGAL_ROOT/lib/www/server/queues/asaqc/index.js
print_log "Run database housekeeping actions"
run $BINDIR/housekeep_database.py
print_log "Run QCs"
PROGNAME=run_qc run $DOUGAL_ROOT/lib/www/server/lib/qc/index.js
rm "$LOCKFILE"

View File

@@ -1,51 +0,0 @@
#!/usr/bin/python3
"""
SPS importing functions.
And by SPS, we mean more or less any line-delimited, fixed-width record format.
"""
import builtins
from parse_fwr import parse_fwr
def int (v):
return builtins.int(float(v))
def parse_line (string, spec):
"""Parse a line from an SPS file."""
names = spec["names"]
widths = spec["widths"]
normalisers = spec["normalisers"]
record = [ t[0](t[1]) for t in zip(normalisers, parse_fwr(string, widths)) ]
return dict(zip(names, record))
def from_file(path, spec = None):
if spec is None:
spec = {
"names": [ "line_name", "point_number", "easting", "northing" ],
"widths": [ -1, 10, 10, -25, 10, 10 ],
"normalisers": [ int, int, float, float ]
}
else:
normaliser_tokens = [ "int", "float", "str", "bool" ]
spec["normalisers"] = [ eval(t) for t in spec["types"] if t in normaliser_tokens ]
records = []
with open(path) as fd:
cnt = 0
line = fd.readline()
while line:
cnt = cnt+1
if line == "EOF":
break
record = parse_line(line, spec)
if record is not None:
records.append(record)
line = fd.readline()
del spec["normalisers"]
return records

View File

@@ -24,6 +24,7 @@ locals().update(configuration.vars())
exportables = {
"public": {
"projects": [ "meta" ],
"info": None,
"real_time_inputs": None
},
"survey": {
@@ -32,12 +33,13 @@ exportables = {
"preplot_lines": [ "remarks", "ntba", "meta" ],
"preplot_points": [ "ntba", "meta" ],
"raw_lines": [ "remarks", "meta" ],
"raw_shots": [ "meta" ]
"raw_shots": [ "meta" ],
"planned_lines": None
}
}
def primary_key (table, cursor):
# https://wiki.postgresql.org/wiki/Retrieve_primary_key_columns
qry = """
SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_type
@@ -48,7 +50,7 @@ def primary_key (table, cursor):
WHERE i.indrelid = %s::regclass
AND i.indisprimary;
"""
cursor.execute(qry, (table,))
return cursor.fetchall()

View File

@@ -40,6 +40,10 @@ if __name__ == '__main__':
continue
try:
for table in exportables:
path = os.path.join(pathPrefix, table)
if os.path.exists(path):
cursor.execute(f"DELETE FROM {table};")
for table in exportables:
path = os.path.join(pathPrefix, table)
print("", path, "", table)

View File

@@ -19,6 +19,7 @@ locals().update(configuration.vars())
exportables = {
"public": {
"projects": [ "meta" ],
"info": None,
"real_time_inputs": None
},
"survey": {
@@ -27,12 +28,13 @@ exportables = {
"preplot_lines": [ "remarks", "ntba", "meta" ],
"preplot_points": [ "ntba", "meta" ],
"raw_lines": [ "remarks", "meta" ],
"raw_shots": [ "meta" ]
"raw_shots": [ "meta" ],
"planned_lines": None
}
}
def primary_key (table, cursor):
# https://wiki.postgresql.org/wiki/Retrieve_primary_key_columns
qry = """
SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_type
@@ -43,13 +45,13 @@ def primary_key (table, cursor):
WHERE i.indrelid = %s::regclass
AND i.indisprimary;
"""
cursor.execute(qry, (table,))
return cursor.fetchall()
def import_table(fd, table, columns, cursor):
pk = [ r[0] for r in primary_key(table, cursor) ]
# Create temporary table to import into
temptable = "import_"+table
print("Creating temporary table", temptable)
@@ -59,29 +61,29 @@ def import_table(fd, table, columns, cursor):
AS SELECT {', '.join(pk + columns)} FROM {table}
WITH NO DATA;
"""
#print(qry)
cursor.execute(qry)
# Import into the temp table
print("Import data into temporary table")
cursor.copy_from(fd, temptable)
# Update the destination table
print("Updating destination table")
setcols = ", ".join([ f"{c} = t.{c}" for c in columns ])
wherecols = " AND ".join([ f"{table}.{c} = t.{c}" for c in pk ])
qry = f"""
UPDATE {table}
SET {setcols}
FROM {temptable} t
WHERE {wherecols};
"""
#print(qry)
cursor.execute(qry)
if __name__ == '__main__':
@@ -109,7 +111,7 @@ if __name__ == '__main__':
print(f"It looks like table {table} may have already been imported. Skipping it.")
except FileNotFoundError:
print(f"File not found. Skipping {path}")
db.conn.commit()
print("Reading surveys")
@@ -128,7 +130,7 @@ if __name__ == '__main__':
columns = exportables["survey"][table]
path = os.path.join(pathPrefix, "-"+table)
print(" ←← ", path, " →→ ", table, columns)
try:
with open(path, "rb") as fd:
if columns is not None:
@@ -141,7 +143,7 @@ if __name__ == '__main__':
print(f"It looks like table {table} may have already been imported. Skipping it.")
except FileNotFoundError:
print(f"File not found. Skipping {path}")
# If we don't commit the data does not actually get copied
db.conn.commit()

bin/update_comparisons.js (new executable file, 60 lines)

@@ -0,0 +1,60 @@
#!/usr/bin/node
const cmp = require('../lib/www/server/lib/comparisons');
async function main () {
console.log("Retrieving project groups");
const groups = await cmp.groups();
if (!Object.keys(groups??{})?.length) {
console.log("No groups found");
return 0;
}
console.log(`Found ${Object.keys(groups).length} groups: ${Object.keys(groups).join(", ")}`);
for (const groupName of Object.keys(groups)) {
const projects = groups[groupName];
console.log(`Fetching saved comparisons for ${groupName}`);
const comparisons = await cmp.getGroup(groupName);
// Check if there are any projects that have been modified since last comparison
// or if there are any pairs that are no longer part of the group
const outdated = comparisons.filter( c => {
const baseline_tstamp = projects.find( p => p.pid === c.baseline_pid )?.tstamp;
const monitor_tstamp = projects.find( p => p.pid === c.monitor_pid )?.tstamp;
return (c.tstamp < baseline_tstamp) || (c.tstamp < monitor_tstamp) ||
baseline_tstamp == null || monitor_tstamp == null;
});
for (const comparison of outdated) {
console.log(`Removing stale comparison: ${comparison.baseline_pid} vs ${comparison.monitor_pid}`);
await cmp.remove(comparison.baseline_pid, comparison.monitor_pid);
}
if (projects?.length < 2) {
console.log(`Group ${groupName} has fewer than two projects. No comparisons are possible`);
continue;
}
// Re-run the comparisons that are not in the database. They may
// be missing either because they were not there to start with
// or because we just removed them due to being stale
console.log(`Recalculating group ${groupName}`);
await cmp.saveGroup(groupName);
}
console.log("Comparisons update done");
return 0;
}
if (require.main === module) {
main();
} else {
module.exports = main;
}

etc/config.example.yaml (new file, 65 lines)

@@ -0,0 +1,65 @@
db:
connection_string: "host=localhost port=5432 dbname=dougal user=postgres"
webhooks:
alert:
url: https://gitlab.com/wgp/dougal/software/alerts/notify.json
authkey: ""
# The authorisation key can be provided here or read from the
# environment variable GITLAB_ALERTS_AUTHKEY. The environment
# variable has precedence. It can be saved under the user's
# Bash .profile. This is the recommended way to avoid accidentally
# committing a security token into the git repository.
navigation:
headers:
-
type: udp
port: 30000
meta:
# Anything here gets passed as options to the packet
# saving routine.
epsg: 23031 # Assume this CRS for unqualified E/N data
# Heuristics to apply to detect survey when offline
offline_survey_heuristics: "nearest_preplot"
# Apply the heuristics at most once every…
offline_survey_detect_interval: 10000 # ms
imports:
# For a file to be imported, it must have been last modified at
# least this many seconds ago.
file_min_age: 60
# These paths refer to remote mounts which must be present in order
# for imports to work. If any of these paths are empty, import actions
# (including data deletion) will be inhibited. This is to cope with
# things like transient network failures.
mounts:
- /srv/mnt/Data
# These paths can be exposed to end users via the API. They should
# contain the locations where project data, or any other user data
# that needs to be accessible by Dougal, is located.
#
# This key can be either a string or an object:
# - If a string, it points to the root path for Dougal-accessible data.
# - If an object, there is an implicit root and the first-level
# paths are denoted by the keys, with the values being their
# respective physical paths.
# Non-absolute paths are relative to $DOUGAL_ROOT.
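# For example (hypothetical mappings, not shipped defaults):
#
# paths:
#   projects: /srv/mnt/Data/projects
#   exports: /srv/mnt/Data/exports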
paths: /srv/mnt/Data
queues:
asaqc:
request:
url: "https://api.gateway.equinor.com/vt/v1/api/upload-file-encoded"
args:
method: POST
headers:
Content-Type: application/json
httpsAgent: # The paths here are relative to $DOUGAL_ROOT
cert: etc/ssl/asaqc.crt
key: etc/ssl/asaqc.key

View File

@@ -1,34 +0,0 @@
db:
connection_string: "host=localhost port=5432 dbname=dougal user=postgres"
webhooks:
alert:
url: https://gitlab.com/wgp/dougal/software/alerts/notify.json
authkey: ""
# The authorisation key can be provided here or read from the
# environment variable GITLAB_ALERTS_AUTHKEY. The environment
# variable has precedence. It can be saved under the user's
# Bash .profile. This is the recommended way to avoid accidentally
# committing a security token into the git repository.
navigation:
headers:
-
type: udp
port: 30000
meta:
# Anything here gets passed as options to the packet
# saving routine.
epsg: 23031 # Assume this CRS for unqualified E/N data
# Heuristics to apply to detect survey when offline
offline_survey_heuristics: "nearest_preplot"
# Apply the heuristics at most once every…
offline_survey_detect_interval: 10000 # ms
imports:
# For a file to be imported, it must have been last modified at
# least this many seconds ago.
file_min_age: 60

View File

@@ -19,3 +19,124 @@ Created with:
```bash
SCHEMA_NAME=survey_X EPSG_CODE=XXXXX $DOUGAL_ROOT/sbin/dump_schema.sh
```
## To create a new Dougal database
Ensure that the following packages are installed:
* `postgresql*-postgis-utils`
* `postgresql*-postgis`
* `postgresql*-contrib` # For B-trees
```bash
psql -U postgres <./database-template.sql
psql -U postgres <./database-version.sql
```
---
# Upgrading PostgreSQL
The following is based on https://en.opensuse.org/SDB:PostgreSQL#Upgrading_major_PostgreSQL_version
```bash
# The following bash code should be checked and executed
# line for line whenever you do an upgrade. The example
# shows the upgrade process from an original installation
# of version 12 up to version 14.
# install the new server as well as the required postgresql-contrib packages:
zypper in postgresql14-server postgresql14-contrib postgresql12-contrib
# If not yet done, create a new PostgreSQL configuration directory...
mkdir /etc/postgresql
# and copy the original file to this global directory
cd /srv/pgsql/data
for i in pg_hba.conf pg_ident.conf postgresql.conf postgresql.auto.conf ; do cp -a $i /etc/postgresql/$i ; done
# Now create a new data-directory and initialize it for usage with the new server
install -d -m 0700 -o postgres -g postgres /srv/pgsql/data14
cd /srv/pgsql/data14
sudo -u postgres /usr/lib/postgresql14/bin/initdb .
# replace the newly generated files by a symlink to the global files.
# After doing so, you may check the difference of the created backup files and
# the files from the former installation
for i in pg_hba.conf pg_ident.conf postgresql.conf postgresql.auto.conf ; do old $i ; ln -s /etc/postgresql/$i .; done
# Copy over special thesaurus files if any exist.
#cp -a /usr/share/postgresql12/tsearch_data/my_thesaurus_german.ths /usr/share/postgresql14/tsearch_data/
# Now it's time to disable the service...
systemctl stop postgresql.service
# And start the migration. Please ensure the directories match your upgrade path
sudo -u postgres /usr/lib/postgresql14/bin/pg_upgrade --link \
--old-bindir="/usr/lib/postgresql12/bin" \
--new-bindir="/usr/lib/postgresql14/bin" \
--old-datadir="/srv/pgsql/data/" \
--new-datadir="/srv/pgsql/data14/"
# NOTE: If getting the following error:
# lc_collate values for database "postgres" do not match: old "en_US.UTF-8", new "C"
# then:
# cd ..
# rm -rf /srv/pgsql/data14
# install -d -m 0700 -o postgres -g postgres /srv/pgsql/data14
# cd /srv/pgsql/data14
# sudo -u postgres /usr/lib/postgresql14/bin/initdb --locale=en_US.UTF-8 .
#
# and repeat the migration command
# After successfully migrating the data...
cd ..
# If not already symlinked, move the old data to a versioned directory matching
# your old installation...
mv data data12
# and set a symlink to the new data directory
ln -sf data14/ data
# Now start the new service
systemctl start postgresql.service
# If everything has been successful, you should uninstall old packages...
#zypper rm -u postgresql12 postgresql13
# and remove old data directories
#rm -rf /srv/pgsql/data_OLD_POSTGRES_VERSION_NUMBER
# For good measure:
sudo -u postgres /usr/lib/postgresql14/bin/vacuumdb --all --analyze-in-stages
# If update_extensions.sql exists, apply it.
```
# Restoring from backup
## Whole database
Ensure that nothing is connected to the database.
```bash
psql -U postgres --dbname postgres <<EOF
-- Database: dougal
DROP DATABASE IF EXISTS dougal;
CREATE DATABASE dougal
WITH
OWNER = postgres
ENCODING = 'UTF8'
LC_COLLATE = 'en_GB.UTF-8'
LC_CTYPE = 'en_GB.UTF-8'
TABLESPACE = pg_default
CONNECTION LIMIT = -1;
ALTER DATABASE dougal
SET search_path TO "$user", public, topology;
EOF
# Adjust --jobs according to host machine
pg_restore -U postgres --dbname dougal --clean --if-exists --jobs 32 /path/to/backup
```

View File

@@ -2,8 +2,8 @@
-- PostgreSQL database dump
--
-- Dumped from database version 12.4
-- Dumped by pg_dump version 12.4
-- Dumped from database version 14.2
-- Dumped by pg_dump version 14.2
SET statement_timeout = 0;
SET lock_timeout = 0;
@@ -102,20 +102,6 @@ CREATE EXTENSION IF NOT EXISTS postgis WITH SCHEMA public;
COMMENT ON EXTENSION postgis IS 'PostGIS geometry, geography, and raster spatial types and functions';
--
-- Name: postgis_raster; Type: EXTENSION; Schema: -; Owner: -
--
CREATE EXTENSION IF NOT EXISTS postgis_raster WITH SCHEMA public;
--
-- Name: EXTENSION postgis_raster; Type: COMMENT; Schema: -; Owner:
--
COMMENT ON EXTENSION postgis_raster IS 'PostGIS raster types and functions';
--
-- Name: postgis_sfcgal; Type: EXTENSION; Schema: -; Owner: -
--
@@ -144,6 +130,221 @@ CREATE EXTENSION IF NOT EXISTS postgis_topology WITH SCHEMA topology;
COMMENT ON EXTENSION postgis_topology IS 'PostGIS topology spatial types and functions';
--
-- Name: queue_item_status; Type: TYPE; Schema: public; Owner: postgres
--
CREATE TYPE public.queue_item_status AS ENUM (
'queued',
'cancelled',
'failed',
'sent'
);
ALTER TYPE public.queue_item_status OWNER TO postgres;
--
-- Name: event_meta(timestamp with time zone); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION public.event_meta(tstamp timestamp with time zone) RETURNS jsonb
LANGUAGE plpgsql
AS $$
BEGIN
RETURN event_meta(tstamp, NULL, NULL);
END;
$$;
ALTER FUNCTION public.event_meta(tstamp timestamp with time zone) OWNER TO postgres;
--
-- Name: FUNCTION event_meta(tstamp timestamp with time zone); Type: COMMENT; Schema: public; Owner: postgres
--
COMMENT ON FUNCTION public.event_meta(tstamp timestamp with time zone) IS 'Overload of event_meta (timestamptz, integer, integer) for use when searching by timestamp.';
--
-- Name: event_meta(integer, integer); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION public.event_meta(sequence integer, point integer) RETURNS jsonb
LANGUAGE plpgsql
AS $$
BEGIN
RETURN event_meta(NULL, sequence, point);
END;
$$;
ALTER FUNCTION public.event_meta(sequence integer, point integer) OWNER TO postgres;
--
-- Name: FUNCTION event_meta(sequence integer, point integer); Type: COMMENT; Schema: public; Owner: postgres
--
COMMENT ON FUNCTION public.event_meta(sequence integer, point integer) IS 'Overload of event_meta (timestamptz, integer, integer) for use when searching by sequence / point.';
--
-- Name: event_meta(timestamp with time zone, integer, integer); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION public.event_meta(tstamp timestamp with time zone, sequence integer, point integer) RETURNS jsonb
LANGUAGE plpgsql
AS $$
DECLARE
result jsonb;
-- Tolerance is hard-coded, at least until a need to expose arises.
tolerance numeric;
BEGIN
tolerance := 3; -- seconds
-- We search by timestamp if we can, as that's a lot quicker
IF tstamp IS NOT NULL THEN
SELECT meta
INTO result
FROM real_time_inputs rti
WHERE
rti.tstamp BETWEEN (event_meta.tstamp - tolerance * interval '1 second') AND (event_meta.tstamp + tolerance * interval '1 second')
ORDER BY abs(extract('epoch' FROM rti.tstamp - event_meta.tstamp ))
LIMIT 1;
ELSE
SELECT meta
INTO result
FROM real_time_inputs rti
WHERE
(meta->>'_sequence')::integer = event_meta.sequence AND
(meta->>'_point')::integer = event_meta.point
ORDER BY rti.tstamp DESC
LIMIT 1;
END IF;
RETURN result;
END;
$$;
ALTER FUNCTION public.event_meta(tstamp timestamp with time zone, sequence integer, point integer) OWNER TO postgres;
--
-- Name: FUNCTION event_meta(tstamp timestamp with time zone, sequence integer, point integer); Type: COMMENT; Schema: public; Owner: postgres
--
COMMENT ON FUNCTION public.event_meta(tstamp timestamp with time zone, sequence integer, point integer) IS 'Return the real-time event metadata associated with a sequence / point in the current project or
with a given timestamp. The timestamp is first searched for in the shot tables
of the current prospect or, if not found, in the real-time data.
Returns a JSONB object.';
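--
-- Usage sketch (not part of the dump; values are hypothetical). The
-- overloads resolve by argument type: by timestamp, or by sequence / point.
--
--   SELECT event_meta('2025-08-21T12:00:00Z'::timestamptz);
--   SELECT event_meta(1042, 5001);
--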
--
-- Name: geometry_from_tstamp(timestamp with time zone, numeric); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION public.geometry_from_tstamp(ts timestamp with time zone, tolerance numeric, OUT geometry public.geometry, OUT delta numeric) RETURNS record
LANGUAGE sql
AS $$
SELECT
geometry,
extract('epoch' FROM tstamp - ts ) AS delta
FROM real_time_inputs
WHERE
geometry IS NOT NULL AND
tstamp BETWEEN (ts - tolerance * interval '1 second') AND (ts + tolerance * interval '1 second')
ORDER BY abs(extract('epoch' FROM tstamp - ts ))
LIMIT 1;
$$;
ALTER FUNCTION public.geometry_from_tstamp(ts timestamp with time zone, tolerance numeric, OUT geometry public.geometry, OUT delta numeric) OWNER TO postgres;
--
-- Name: FUNCTION geometry_from_tstamp(ts timestamp with time zone, tolerance numeric, OUT geometry public.geometry, OUT delta numeric); Type: COMMENT; Schema: public; Owner: postgres
--
COMMENT ON FUNCTION public.geometry_from_tstamp(ts timestamp with time zone, tolerance numeric, OUT geometry public.geometry, OUT delta numeric) IS 'Get geometry from timestamp';
--
-- Name: interpolate_geometry_from_tstamp(timestamp with time zone, numeric); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION public.interpolate_geometry_from_tstamp(ts timestamp with time zone, maxspan numeric) RETURNS public.geometry
LANGUAGE plpgsql
AS $$
DECLARE
ts0 timestamptz;
ts1 timestamptz;
geom0 geometry;
geom1 geometry;
span numeric;
fraction numeric;
BEGIN
SELECT tstamp, geometry
INTO ts0, geom0
FROM real_time_inputs
WHERE tstamp <= ts
ORDER BY tstamp DESC
LIMIT 1;
SELECT tstamp, geometry
INTO ts1, geom1
FROM real_time_inputs
WHERE tstamp >= ts
ORDER BY tstamp ASC
LIMIT 1;
IF geom0 IS NULL OR geom1 IS NULL THEN
RAISE NOTICE 'Interpolation failed (no straddling data)';
RETURN NULL;
END IF;
-- See if we got an exact match
IF ts0 = ts THEN
RETURN geom0;
ELSIF ts1 = ts THEN
RETURN geom1;
END IF;
span := extract('epoch' FROM ts1 - ts0);
IF span > maxspan THEN
RAISE NOTICE 'Interpolation timespan % outside maximum requested (%)', span, maxspan;
RETURN NULL;
END IF;
fraction := extract('epoch' FROM ts - ts0) / span;
IF fraction < 0 OR fraction > 1 THEN
RAISE NOTICE 'Requested timestamp % outside of interpolation span (fraction: %)', ts, fraction;
RETURN NULL;
END IF;
RETURN ST_LineInterpolatePoint(St_MakeLine(geom0, geom1), fraction);
END;
$$;
ALTER FUNCTION public.interpolate_geometry_from_tstamp(ts timestamp with time zone, maxspan numeric) OWNER TO postgres;
--
-- Name: FUNCTION interpolate_geometry_from_tstamp(ts timestamp with time zone, maxspan numeric); Type: COMMENT; Schema: public; Owner: postgres
--
COMMENT ON FUNCTION public.interpolate_geometry_from_tstamp(ts timestamp with time zone, maxspan numeric) IS 'Interpolate a position over a given maximum timespan (in seconds)
based on real-time inputs. Returns a POINT geometry.';
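-- Usage sketch (values are hypothetical): interpolate a position at the given
-- instant, provided the straddling fixes are no more than 60 s apart:
--
--   SELECT public.interpolate_geometry_from_tstamp('2025-08-21 14:58:53+02'::timestamptz, 60);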
--
-- Name: notify(); Type: FUNCTION; Schema: public; Owner: postgres
--
@@ -182,23 +383,110 @@ $$;
ALTER FUNCTION public.notify() OWNER TO postgres;
--
-- Name: sequence_shot_from_tstamp(timestamp with time zone); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION public.sequence_shot_from_tstamp(ts timestamp with time zone, OUT sequence numeric, OUT point numeric, OUT delta numeric) RETURNS record
LANGUAGE sql
AS $$
SELECT * FROM public.sequence_shot_from_tstamp(ts, 3);
$$;
ALTER FUNCTION public.sequence_shot_from_tstamp(ts timestamp with time zone, OUT sequence numeric, OUT point numeric, OUT delta numeric) OWNER TO postgres;
--
-- Name: FUNCTION sequence_shot_from_tstamp(ts timestamp with time zone, OUT sequence numeric, OUT point numeric, OUT delta numeric); Type: COMMENT; Schema: public; Owner: postgres
--
COMMENT ON FUNCTION public.sequence_shot_from_tstamp(ts timestamp with time zone, OUT sequence numeric, OUT point numeric, OUT delta numeric) IS 'Get sequence and shotpoint from timestamp.
Overloaded form in which the tolerance value is implied and defaults to three seconds.';
--
-- Name: sequence_shot_from_tstamp(timestamp with time zone, numeric); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION public.sequence_shot_from_tstamp(ts timestamp with time zone, tolerance numeric, OUT sequence numeric, OUT point numeric, OUT delta numeric) RETURNS record
LANGUAGE sql
AS $$
SELECT
(meta->>'_sequence')::numeric AS sequence,
(meta->>'_point')::numeric AS point,
extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts ) AS delta
FROM real_time_inputs
WHERE
meta ? '_sequence' AND
abs(extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts )) < tolerance
ORDER BY abs(extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts ))
LIMIT 1;
$$;
ALTER FUNCTION public.sequence_shot_from_tstamp(ts timestamp with time zone, tolerance numeric, OUT sequence numeric, OUT point numeric, OUT delta numeric) OWNER TO postgres;
--
-- Name: FUNCTION sequence_shot_from_tstamp(ts timestamp with time zone, tolerance numeric, OUT sequence numeric, OUT point numeric, OUT delta numeric); Type: COMMENT; Schema: public; Owner: postgres
--
COMMENT ON FUNCTION public.sequence_shot_from_tstamp(ts timestamp with time zone, tolerance numeric, OUT sequence numeric, OUT point numeric, OUT delta numeric) IS 'Get sequence and shotpoint from timestamp.
Given a timestamp this function returns the closest shot to it within the given tolerance value.
This uses the `real_time_inputs` table and it does not give an indication of which project the shotpoint belongs to. It is assumed that a single project is being acquired at a given time.';
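-- Usage sketch (timestamps are hypothetical): find the shot nearest to a
-- timestamp, first with the default 3 s tolerance, then with a wider 10 s one:
--
--   SELECT * FROM public.sequence_shot_from_tstamp('2025-08-21 14:58:53+02'::timestamptz);
--   SELECT * FROM public.sequence_shot_from_tstamp('2025-08-21 14:58:53+02'::timestamptz, 10);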
--
-- Name: set_survey(text); Type: PROCEDURE; Schema: public; Owner: postgres
--
CREATE PROCEDURE public.set_survey(project_id text)
CREATE PROCEDURE public.set_survey(IN project_id text)
LANGUAGE sql
AS $$
SELECT set_config('search_path', (SELECT schema||',public' FROM public.projects WHERE pid = lower(project_id)), false);
$$;
ALTER PROCEDURE public.set_survey(project_id text) OWNER TO postgres;
ALTER PROCEDURE public.set_survey(IN project_id text) OWNER TO postgres;
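-- Usage sketch (project id is hypothetical): point the session search_path at
-- a project's schema before querying its tables:
--
--   CALL public.set_survey('my_project');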
--
-- Name: update_timestamp(); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION public.update_timestamp() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
IF NEW.updated_on IS NOT NULL THEN
NEW.updated_on := current_timestamp;
END IF;
RETURN NEW;
EXCEPTION
WHEN undefined_column THEN RETURN NEW;
END;
$$;
ALTER FUNCTION public.update_timestamp() OWNER TO postgres;
SET default_tablespace = '';
SET default_table_access_method = heap;
--
-- Name: info; Type: TABLE; Schema: public; Owner: postgres
--
CREATE TABLE public.info (
key text NOT NULL,
value jsonb
);
ALTER TABLE public.info OWNER TO postgres;
--
-- Name: projects; Type: TABLE; Schema: public; Owner: postgres
--
@@ -213,6 +501,46 @@ CREATE TABLE public.projects (
ALTER TABLE public.projects OWNER TO postgres;
--
-- Name: queue_items; Type: TABLE; Schema: public; Owner: postgres
--
CREATE TABLE public.queue_items (
item_id integer NOT NULL,
status public.queue_item_status DEFAULT 'queued'::public.queue_item_status NOT NULL,
payload jsonb NOT NULL,
results jsonb DEFAULT '{}'::jsonb NOT NULL,
created_on timestamp with time zone DEFAULT CURRENT_TIMESTAMP NOT NULL,
updated_on timestamp with time zone DEFAULT CURRENT_TIMESTAMP NOT NULL,
not_before timestamp with time zone DEFAULT '1970-01-01 00:00:00+00'::timestamp with time zone NOT NULL,
parent_id integer
);
ALTER TABLE public.queue_items OWNER TO postgres;
--
-- Name: queue_items_item_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres
--
CREATE SEQUENCE public.queue_items_item_id_seq
AS integer
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE public.queue_items_item_id_seq OWNER TO postgres;
--
-- Name: queue_items_item_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres
--
ALTER SEQUENCE public.queue_items_item_id_seq OWNED BY public.queue_items.item_id;
--
-- Name: real_time_inputs; Type: TABLE; Schema: public; Owner: postgres
--
@@ -226,6 +554,21 @@ CREATE TABLE public.real_time_inputs (
ALTER TABLE public.real_time_inputs OWNER TO postgres;
--
-- Name: queue_items item_id; Type: DEFAULT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY public.queue_items ALTER COLUMN item_id SET DEFAULT nextval('public.queue_items_item_id_seq'::regclass);
--
-- Name: info info_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY public.info
ADD CONSTRAINT info_pkey PRIMARY KEY (key);
--
-- Name: projects projects_name_key; Type: CONSTRAINT; Schema: public; Owner: postgres
--
@@ -250,6 +593,14 @@ ALTER TABLE ONLY public.projects
ADD CONSTRAINT projects_schema_key UNIQUE (schema);
--
-- Name: queue_items queue_items_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY public.queue_items
ADD CONSTRAINT queue_items_pkey PRIMARY KEY (item_id);
--
-- Name: tstamp_idx; Type: INDEX; Schema: public; Owner: postgres
--
@@ -257,6 +608,13 @@ ALTER TABLE ONLY public.projects
CREATE INDEX tstamp_idx ON public.real_time_inputs USING btree (tstamp DESC);
--
-- Name: info info_tg; Type: TRIGGER; Schema: public; Owner: postgres
--
CREATE TRIGGER info_tg AFTER INSERT OR DELETE OR UPDATE ON public.info FOR EACH ROW EXECUTE FUNCTION public.notify('info');
--
-- Name: projects projects_tg; Type: TRIGGER; Schema: public; Owner: postgres
--
@@ -264,6 +622,20 @@ CREATE INDEX tstamp_idx ON public.real_time_inputs USING btree (tstamp DESC);
CREATE TRIGGER projects_tg AFTER INSERT OR DELETE OR UPDATE ON public.projects FOR EACH ROW EXECUTE FUNCTION public.notify('project');
--
-- Name: queue_items queue_items_tg0; Type: TRIGGER; Schema: public; Owner: postgres
--
CREATE TRIGGER queue_items_tg0 BEFORE INSERT OR UPDATE ON public.queue_items FOR EACH ROW EXECUTE FUNCTION public.update_timestamp();
--
-- Name: queue_items queue_items_tg1; Type: TRIGGER; Schema: public; Owner: postgres
--
CREATE TRIGGER queue_items_tg1 AFTER INSERT OR DELETE OR UPDATE ON public.queue_items FOR EACH ROW EXECUTE FUNCTION public.notify('queue_items');
--
-- Name: real_time_inputs real_time_inputs_tg; Type: TRIGGER; Schema: public; Owner: postgres
--
@@ -271,6 +643,14 @@ CREATE TRIGGER projects_tg AFTER INSERT OR DELETE OR UPDATE ON public.projects F
CREATE TRIGGER real_time_inputs_tg AFTER INSERT ON public.real_time_inputs FOR EACH ROW EXECUTE FUNCTION public.notify('realtime');
--
-- Name: queue_items queue_items_parent_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY public.queue_items
ADD CONSTRAINT queue_items_parent_id_fkey FOREIGN KEY (parent_id) REFERENCES public.queue_items(item_id);
--
-- PostgreSQL database dump complete
--


@@ -0,0 +1,5 @@
\connect dougal
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.4.5"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.4.5"}' WHERE public.info.key = 'version';

File diff suppressed because it is too large

etc/db/upgrades/README.md Normal file

@@ -0,0 +1,34 @@
# Database schema upgrades
When the database schema needs to be upgraded in order to provide new functionality, fix errors, etc., an upgrade script should be added to this directory.
The script can be SQL (preferred) or anything else (Bash, Python, …) for complex upgrades.
The script itself should:
* document what the intended changes are;
* contain instructions on how to run it;
* make the user aware of any non-obvious side effects; and
* say if it is safe to run the script multiple times on the same schema / database.
## Naming
Script files should be named `upgrade-<index>-<commit-id-old>-<commit-id-new>-v<schema-version>.sql` (see the example after this list), where:
* `<index>` is a sequential two-digit index. When reaching 99, existing files will be renamed to a three-digit index (001-099) and new files will use three digits.
* `<commit-id-old>` is the ID of the Git commit that last introduced a schema change.
* `<commit-id-new>` is the ID of the first Git commit expecting the updated schema.
* `<schema-version>` is the version of the schema.
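For example, a hypothetical upgrade taking the schema from commit `53f71f70` to commit `4d977848` at schema version 0.2.0 would be named `upgrade-07-53f71f70-4d977848-v0.2.0.sql` (the index `07` is illustrative).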
Note: the `<schema-version>` value should be updated with every change and it should match the value reported by:
```sql
select value->>'db_schema' as db_schema from public.info where key = 'version';
```
If necessary, the wanted schema version must also be updated in `package.json`.
## Running
Schema upgrades are always run manually.


@@ -0,0 +1,22 @@
-- Upgrade the database from commit 78adb2be to 7917eeeb.
--
-- This upgrade affects the `public` schema only.
--
-- It creates a new table, `info`, for storing arbitrary JSON
-- data not belonging to a specific project. Currently used
-- for the equipment list, it could also serve to store user
-- details, configuration settings, system state, etc.
--
-- To apply, run as the dougal user:
--
-- psql < $THIS_FILE
--
-- NOTE: It will fail harmlessly if applied twice.
CREATE TABLE IF NOT EXISTS public.info (
key text NOT NULL primary key,
value jsonb
);
CREATE TRIGGER info_tg AFTER INSERT OR DELETE OR UPDATE ON public.info FOR EACH ROW EXECUTE FUNCTION public.notify('info');
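-- Usage sketch (key and value are hypothetical): store and retrieve an
-- arbitrary JSON document:
--
--   INSERT INTO public.info VALUES ('equipment', '{"streamers": 12}')
--   ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value;
--   SELECT value FROM public.info WHERE key = 'equipment';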


@@ -0,0 +1,160 @@
-- Upgrade the database from commit 6e7ba82e to 53f71f70.
--
-- NOTE: This upgrade must be applied to every schema in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This merges two changes to the database.
-- The first one (commit 5de64e6b) modifies the `event` view to return
-- the `meta` column of timed and sequence events.
-- The second one (commit 53f71f70) adds a primary key constraint to
-- events_seq_labels (there is already an equivalent constraint on
-- events_seq_timed).
--
-- To apply, run as the dougal user, for every schema in the database:
--
-- psql <<EOF
-- SET search_path TO survey_*,public;
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It will fail harmlessly if applied twice.
BEGIN;
DROP VIEW events_seq_timed CASCADE; -- Brings down events too
ALTER TABLE ONLY events_seq_labels
ADD CONSTRAINT events_seq_labels_pkey PRIMARY KEY (id, label);
CREATE OR REPLACE VIEW events_seq_timed AS
SELECT s.sequence,
s.point,
s.id,
s.remarks,
rs.line,
rs.objref,
rs.tstamp,
rs.hash,
s.meta,
rs.geometry
FROM (events_seq s
LEFT JOIN raw_shots rs USING (sequence, point));
CREATE OR REPLACE VIEW events AS
WITH qc AS (
SELECT rs.sequence,
rs.point,
ARRAY[jsonb_array_elements_text(q.labels)] AS labels
FROM raw_shots rs,
LATERAL jsonb_path_query(rs.meta, '$."qc".*."labels"'::jsonpath) q(labels)
)
SELECT 'sequence'::text AS type,
false AS virtual,
s.sequence,
s.point,
s.id,
s.remarks,
s.line,
s.objref,
s.tstamp,
s.hash,
s.meta,
(public.st_asgeojson(public.st_transform(s.geometry, 4326)))::jsonb AS geometry,
ARRAY( SELECT esl.label
FROM events_seq_labels esl
WHERE (esl.id = s.id)) AS labels
FROM events_seq_timed s
UNION
SELECT 'timed'::text AS type,
false AS virtual,
rs.sequence,
rs.point,
t.id,
t.remarks,
rs.line,
rs.objref,
t.tstamp,
rs.hash,
t.meta,
(t.meta -> 'geometry'::text) AS geometry,
ARRAY( SELECT etl.label
FROM events_timed_labels etl
WHERE (etl.id = t.id)) AS labels
FROM ((events_timed t
LEFT JOIN events_timed_seq ts USING (id))
LEFT JOIN raw_shots rs USING (sequence, point))
UNION
SELECT 'midnight shot'::text AS type,
true AS virtual,
v1.sequence,
v1.point,
((v1.sequence * 100000) + v1.point) AS id,
''::text AS remarks,
v1.line,
v1.objref,
v1.tstamp,
v1.hash,
'{}'::jsonb meta,
(public.st_asgeojson(public.st_transform(v1.geometry, 4326)))::jsonb AS geometry,
ARRAY[v1.label] AS labels
FROM events_midnight_shot v1
UNION
SELECT 'qc'::text AS type,
true AS virtual,
rs.sequence,
rs.point,
((10000000 + (rs.sequence * 100000)) + rs.point) AS id,
(q.remarks)::text AS remarks,
rs.line,
rs.objref,
rs.tstamp,
rs.hash,
'{}'::jsonb meta,
(public.st_asgeojson(public.st_transform(rs.geometry, 4326)))::jsonb AS geometry,
('{QC}'::text[] || qc.labels) AS labels
FROM (raw_shots rs
LEFT JOIN qc USING (sequence, point)),
LATERAL jsonb_path_query(rs.meta, '$."qc".*."results"'::jsonpath) q(remarks)
WHERE (rs.meta ? 'qc'::text);
CREATE OR REPLACE VIEW final_lines_summary AS
WITH summary AS (
SELECT DISTINCT fs.sequence,
first_value(fs.point) OVER w AS fsp,
last_value(fs.point) OVER w AS lsp,
first_value(fs.tstamp) OVER w AS ts0,
last_value(fs.tstamp) OVER w AS ts1,
count(fs.point) OVER w AS num_points,
public.st_distance(first_value(fs.geometry) OVER w, last_value(fs.geometry) OVER w) AS length,
((public.st_azimuth(first_value(fs.geometry) OVER w, last_value(fs.geometry) OVER w) * (180)::double precision) / pi()) AS azimuth
FROM final_shots fs
WINDOW w AS (PARTITION BY fs.sequence ORDER BY fs.tstamp ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
SELECT fl.sequence,
fl.line,
s.fsp,
s.lsp,
s.ts0,
s.ts1,
(s.ts1 - s.ts0) AS duration,
s.num_points,
(( SELECT count(*) AS count
FROM preplot_points
WHERE ((preplot_points.line = fl.line) AND (((preplot_points.point >= s.fsp) AND (preplot_points.point <= s.lsp)) OR ((preplot_points.point >= s.lsp) AND (preplot_points.point <= s.fsp))))) - s.num_points) AS missing_shots,
s.length,
s.azimuth,
fl.remarks,
fl.meta
FROM (summary s
JOIN final_lines fl USING (sequence));
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,171 @@
-- Upgrade the database from commit 53f71f70 to 4d977848.
--
-- NOTE: This upgrade must be applied to every schema in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This adds:
--
-- * label_in_sequence (_sequence integer, _label text):
-- Returns events containing the specified label.
--
-- * handle_final_line_events (_seq integer, _label text, _column text):
-- - If _label does not exist in the events for sequence _seq:
-- it adds a new _label label at the shotpoint obtained from
-- final_lines_summary[_column].
-- - If _label does exist (and hasn't been auto-added by this function
-- in a previous run), it will add information about it to the final
-- line's metadata.
--
-- * final_line_post_import (_seq integer):
-- Calls handle_final_line_events() on the given sequence to check
-- for FSP, FGSP, LGSP and LSP labels.
--
-- * events_seq_labels_single ():
-- Trigger function to ensure that labels that have the attribute
-- `model.multiple` set to `false` occur at most once per
-- sequence. If a new instance is added to a sequence, the previous
-- instance is deleted.
--
-- * Trigger on events_seq_labels that calls events_seq_labels_single().
--
-- * Trigger on events_timed_labels that calls events_seq_labels_single().
--
-- To apply, run as the dougal user, for every schema in the database:
--
-- psql <<EOF
-- SET search_path TO survey_*,public;
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It will fail harmlessly if applied twice.
BEGIN;
CREATE OR REPLACE FUNCTION label_in_sequence (_sequence integer, _label text)
RETURNS events
LANGUAGE sql
AS $$
SELECT * FROM events WHERE sequence = _sequence AND _label = ANY(labels);
$$;
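-- Usage sketch (sequence number is hypothetical): fetch the event in
-- sequence 1234 carrying the FSP label, if any:
--
--   SELECT * FROM label_in_sequence(1234, 'FSP');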
CREATE OR REPLACE PROCEDURE handle_final_line_events (_seq integer, _label text, _column text)
LANGUAGE plpgsql
AS $$
DECLARE
_line final_lines_summary%ROWTYPE;
_column_value integer;
_tg_name text := 'final_line';
_event events%ROWTYPE;
event_id integer;
BEGIN
SELECT * INTO _line FROM final_lines_summary WHERE sequence = _seq;
_event := label_in_sequence(_seq, _label);
_column_value := row_to_json(_line)->>_column;
--RAISE NOTICE '% is %', _label, _event;
--RAISE NOTICE 'Line is %', _line;
--RAISE NOTICE '% is % (%)', _column, _column_value, _label;
IF _event IS NULL THEN
--RAISE NOTICE 'We will populate the event log from the sequence data';
SELECT id INTO event_id FROM events_seq WHERE sequence = _seq AND point = _column_value ORDER BY id LIMIT 1;
IF event_id IS NULL THEN
--RAISE NOTICE '… but there is no existing event so we create a new one for sequence % and point %', _line.sequence, _column_value;
INSERT INTO events_seq (sequence, point, remarks)
VALUES (_line.sequence, _column_value, format('%s %s', _label, (SELECT meta->>'lineName' FROM final_lines WHERE sequence = _seq)))
RETURNING id INTO event_id;
--RAISE NOTICE 'Created event_id %', event_id;
END IF;
--RAISE NOTICE 'Remove any other auto-inserted % labels in sequence %', _label, _seq;
DELETE FROM events_seq_labels
WHERE label = _label AND id = (SELECT id FROM events_seq WHERE sequence = _seq AND meta->'auto' ? _label);
--RAISE NOTICE 'We now add a label to the event (id, label) = (%, %)', event_id, _label;
INSERT INTO events_seq_labels (id, label) VALUES (event_id, _label) ON CONFLICT ON CONSTRAINT events_seq_labels_pkey DO NOTHING;
--RAISE NOTICE 'And also clear the %: % flag from meta.auto for any existing events for sequence %', _label, _tg_name, _seq;
UPDATE events_seq
SET meta = meta #- ARRAY['auto', _label]
WHERE meta->'auto' ? _label AND sequence = _seq AND id <> event_id;
--RAISE NOTICE 'Finally, flag the event as having been had label % auto-created by %', _label, _tg_name;
UPDATE events_seq
SET meta = jsonb_set(jsonb_set(meta, '{auto}', COALESCE(meta->'auto', '{}')), ARRAY['auto', _label], to_jsonb(_tg_name))
WHERE id = event_id;
ELSE
--RAISE NOTICE 'We may populate the sequence meta from the event log';
--RAISE NOTICE 'Unless the event log was populated by us previously';
--RAISE NOTICE 'Populated by us previously? %', _event.meta->'auto'->>_label = _tg_name;
IF _event.meta->'auto'->>_label IS DISTINCT FROM _tg_name THEN
--RAISE NOTICE 'Adding % found in events log to final_line meta', _label;
UPDATE final_lines
SET meta = jsonb_set(meta, ARRAY[_label], to_jsonb(_event.point))
WHERE sequence = _seq;
--RAISE NOTICE 'Clearing the %: % flag from meta.auto for any existing events in sequence %', _label, _tg_name, _seq;
UPDATE events_seq
SET meta = meta #- ARRAY['auto', _label]
WHERE sequence = _seq AND meta->'auto'->>_label = _tg_name;
END IF;
END IF;
END;
$$;
CREATE OR REPLACE PROCEDURE final_line_post_import (_seq integer)
LANGUAGE plpgsql
AS $$
BEGIN
CALL handle_final_line_events(_seq, 'FSP', 'fsp');
CALL handle_final_line_events(_seq, 'FGSP', 'fsp');
CALL handle_final_line_events(_seq, 'LGSP', 'lsp');
CALL handle_final_line_events(_seq, 'LSP', 'lsp');
END;
$$;
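-- Usage sketch (sequence number is hypothetical), e.g. after importing a
-- final line:
--
--   CALL final_line_post_import(1234);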
CREATE OR REPLACE FUNCTION events_seq_labels_single ()
RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE _sequence integer;
BEGIN
IF EXISTS(SELECT 1 FROM labels WHERE name = NEW.label AND (data->'model'->'multiple')::boolean IS FALSE) THEN
SELECT sequence INTO _sequence FROM events WHERE id = NEW.id;
DELETE
FROM events_seq_labels
WHERE
id <> NEW.id
AND label = NEW.label
AND id IN (SELECT id FROM events_seq WHERE sequence = _sequence);
DELETE
FROM events_timed_labels
WHERE
id <> NEW.id
AND label = NEW.label
AND id IN (SELECT id FROM events_timed_seq WHERE sequence = _sequence);
END IF;
RETURN NULL;
END;
$$;
CREATE TRIGGER events_seq_labels_single_tg AFTER INSERT OR UPDATE ON events_seq_labels FOR EACH ROW EXECUTE FUNCTION events_seq_labels_single();
CREATE TRIGGER events_seq_labels_single_tg AFTER INSERT OR UPDATE ON events_timed_labels FOR EACH ROW EXECUTE FUNCTION events_seq_labels_single();
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,94 @@
-- Upgrade the database from commit 4d977848 to 3d70a460.
--
-- NOTE: This upgrade must be applied to every schema in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This adds the `meta` column to the output of the following views:
--
-- * raw_lines_summary; and
-- * sequences_summary
--
-- To apply, run as the dougal user, for every schema in the database:
--
-- psql <<EOF
-- SET search_path TO survey_*,public;
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
BEGIN;
CREATE OR REPLACE VIEW raw_lines_summary AS
WITH summary AS (
SELECT DISTINCT rs.sequence,
first_value(rs.point) OVER w AS fsp,
last_value(rs.point) OVER w AS lsp,
first_value(rs.tstamp) OVER w AS ts0,
last_value(rs.tstamp) OVER w AS ts1,
count(rs.point) OVER w AS num_points,
count(pp.point) OVER w AS num_preplots,
public.st_distance(first_value(rs.geometry) OVER w, last_value(rs.geometry) OVER w) AS length,
((public.st_azimuth(first_value(rs.geometry) OVER w, last_value(rs.geometry) OVER w) * (180)::double precision) / pi()) AS azimuth
FROM (raw_shots rs
LEFT JOIN preplot_points pp USING (line, point))
WINDOW w AS (PARTITION BY rs.sequence ORDER BY rs.tstamp ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
SELECT rl.sequence,
rl.line,
s.fsp,
s.lsp,
s.ts0,
s.ts1,
(s.ts1 - s.ts0) AS duration,
s.num_points,
s.num_preplots,
(( SELECT count(*) AS count
FROM preplot_points
WHERE ((preplot_points.line = rl.line) AND (((preplot_points.point >= s.fsp) AND (preplot_points.point <= s.lsp)) OR ((preplot_points.point >= s.lsp) AND (preplot_points.point <= s.fsp))))) - s.num_preplots) AS missing_shots,
s.length,
s.azimuth,
rl.remarks,
rl.ntbp,
rl.meta
FROM (summary s
JOIN raw_lines rl USING (sequence));
DROP VIEW sequences_summary;
CREATE OR REPLACE VIEW sequences_summary AS
SELECT rls.sequence,
rls.line,
rls.fsp,
rls.lsp,
fls.fsp AS fsp_final,
fls.lsp AS lsp_final,
rls.ts0,
rls.ts1,
fls.ts0 AS ts0_final,
fls.ts1 AS ts1_final,
rls.duration,
fls.duration AS duration_final,
rls.num_preplots,
COALESCE(fls.num_points, rls.num_points) AS num_points,
COALESCE(fls.missing_shots, rls.missing_shots) AS missing_shots,
COALESCE(fls.length, rls.length) AS length,
COALESCE(fls.azimuth, rls.azimuth) AS azimuth,
rls.remarks,
fls.remarks AS remarks_final,
rls.meta,
fls.meta AS meta_final,
CASE
WHEN (rls.ntbp IS TRUE) THEN 'ntbp'::text
WHEN (fls.sequence IS NULL) THEN 'raw'::text
ELSE 'final'::text
END AS status
FROM (raw_lines_summary rls
LEFT JOIN final_lines_summary fls USING (sequence));
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,33 @@
-- Upgrade the database from commit 3d70a460 to 0983abac.
--
-- NOTE: This upgrade must be applied to every schema in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This:
--
-- * makes the primary key on planned_lines deferrable; and
-- * changes the planned_lines trigger from statement to row.
--
-- To apply, run as the dougal user, for every schema in the database:
--
-- psql <<EOF
-- SET search_path TO survey_*,public;
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
BEGIN;
ALTER TABLE planned_lines DROP CONSTRAINT planned_lines_pkey;
ALTER TABLE planned_lines ADD CONSTRAINT planned_lines_pkey PRIMARY KEY (sequence) DEFERRABLE;
DROP TRIGGER planned_lines_tg ON planned_lines;
CREATE TRIGGER planned_lines_tg AFTER INSERT OR DELETE OR UPDATE ON planned_lines FOR EACH ROW EXECUTE FUNCTION public.notify('planned_lines');
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,207 @@
-- Upgrade the database from commit 0983abac to 81d9ea19.
--
-- NOTE: This upgrade must be applied to every schema in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This defines a new procedure adjust_planner() which resolves some
-- conflicts between shot sequences and the planner, such as removing
-- sequences that have been shot, renumbering, or adjusting the planned
-- times.
--
-- It is meant to be called at regular intervals by an external process,
-- such as the runner (software/bin/runner.sh).
--
-- A trigger for changes to the schema's `info` table is also added.
--
-- To apply, run as the dougal user, for every schema in the database:
--
-- psql <<EOF
-- SET search_path TO survey_*,public;
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
BEGIN;
CREATE OR REPLACE PROCEDURE adjust_planner ()
LANGUAGE plpgsql
AS $$
DECLARE
_planner_config jsonb;
_planned_line planned_lines%ROWTYPE;
_lag interval;
_last_sequence sequences_summary%ROWTYPE;
_deltatime interval;
_shotinterval interval;
_tstamp timestamptz;
_incr integer;
BEGIN
SET CONSTRAINTS planned_lines_pkey DEFERRED;
SELECT data->'planner'
INTO _planner_config
FROM file_data
WHERE data ? 'planner';
SELECT *
INTO _last_sequence
FROM sequences_summary
ORDER BY sequence DESC
LIMIT 1;
SELECT *
INTO _planned_line
FROM planned_lines
WHERE sequence = _last_sequence.sequence AND line = _last_sequence.line;
SELECT
COALESCE(
((lead(ts0) OVER (ORDER BY sequence)) - ts1),
make_interval(mins => (_planner_config->>'defaultLineChangeDuration')::integer)
)
INTO _lag
FROM planned_lines
WHERE sequence = _last_sequence.sequence AND line = _last_sequence.line;
_incr = sign(_last_sequence.lsp - _last_sequence.fsp);
RAISE NOTICE '_planner_config: %', _planner_config;
RAISE NOTICE '_last_sequence: %', _last_sequence;
RAISE NOTICE '_planned_line: %', _planned_line;
RAISE NOTICE '_incr: %', _incr;
-- Does the latest sequence match a planned sequence?
IF _planned_line IS NULL THEN -- No it doesn't
RAISE NOTICE 'Latest sequence shot does not match a planned sequence';
SELECT * INTO _planned_line FROM planned_lines ORDER BY sequence ASC LIMIT 1;
RAISE NOTICE '_planned_line: %', _planned_line;
IF _planned_line.sequence <= _last_sequence.sequence THEN
RAISE NOTICE 'Renumbering the planned sequences starting from %', _planned_line.sequence + 1;
-- Renumber the planned sequences starting from last shot sequence number + 1
UPDATE planned_lines
SET sequence = sequence + _last_sequence.sequence - _planned_line.sequence + 1;
END IF;
-- The correction to make to the first planned line's ts0 will be based on either the last
-- sequence's EOL + default line change time or the current time, whichever is later.
_deltatime := GREATEST(COALESCE(_last_sequence.ts1_final, _last_sequence.ts1) + make_interval(mins => (_planner_config->>'defaultLineChangeDuration')::integer), current_timestamp) - _planned_line.ts0;
-- Is the first planned line's start time in the past? (±5 mins)
IF _planned_line.ts0 < (current_timestamp - make_interval(mins => 5)) THEN
RAISE NOTICE 'First planned line is in the past. Adjusting times by %', _deltatime;
-- Adjust the start / end time of the planned lines by assuming that we are at
-- `defaultLineChangeDuration` minutes away from SOL of the first planned line.
UPDATE planned_lines
SET
ts0 = ts0 + _deltatime,
ts1 = ts1 + _deltatime;
END IF;
ELSE -- Yes it does
RAISE NOTICE 'Latest sequence does match a planned sequence: %, %', _planned_line.sequence, _planned_line.line;
-- Is it online?
IF EXISTS(SELECT 1 FROM raw_lines_files WHERE sequence = _last_sequence.sequence AND hash = '*online*') THEN
-- Yes it is
RAISE NOTICE 'Sequence % is online', _last_sequence.sequence;
-- Let us get the SOL from the events log if we can
RAISE NOTICE 'Trying to set fsp, ts0 from events log FSP, FGSP';
WITH e AS (
SELECT * FROM events
WHERE
sequence = _last_sequence.sequence
AND ('FSP' = ANY(labels) OR 'FGSP' = ANY(labels))
ORDER BY tstamp LIMIT 1
)
UPDATE planned_lines
SET
fsp = COALESCE(e.point, fsp),
ts0 = COALESCE(e.tstamp, ts0)
FROM e
WHERE planned_lines.sequence = _last_sequence.sequence;
-- Shot interval
_shotinterval := (_last_sequence.ts1 - _last_sequence.ts0) / abs(_last_sequence.lsp - _last_sequence.fsp);
RAISE NOTICE 'Estimating EOL from current shot interval: %', _shotinterval;
SELECT (abs(lsp-fsp) * _shotinterval + ts0) - ts1
INTO _deltatime
FROM planned_lines
WHERE sequence = _last_sequence.sequence;
---- Set ts1 for the current sequence
--UPDATE planned_lines
--SET
--ts1 = (abs(lsp-fsp) * _shotinterval) + ts0
--WHERE sequence = _last_sequence.sequence;
RAISE NOTICE 'Adjustment is %', _deltatime;
IF abs(EXTRACT(EPOCH FROM _deltatime)) < 8 THEN
RAISE NOTICE 'Adjustment too small (< 8 s), so not applying it';
RETURN;
END IF;
-- Adjust ts1 for the current sequence
UPDATE planned_lines
SET ts1 = ts1 + _deltatime
WHERE sequence = _last_sequence.sequence;
-- Now shift all sequences after
UPDATE planned_lines
SET ts0 = ts0 + _deltatime, ts1 = ts1 + _deltatime
WHERE sequence > _last_sequence.sequence;
RAISE NOTICE 'Deleting planned sequences before %', _planned_line.sequence;
-- Remove all previous planner entries.
DELETE
FROM planned_lines
WHERE sequence < _last_sequence.sequence;
ELSE
-- No it isn't
RAISE NOTICE 'Sequence % is offline', _last_sequence.sequence;
-- We were supposed to finish at _planned_line.ts1 but we finished at:
_tstamp := GREATEST(COALESCE(_last_sequence.ts1_final, _last_sequence.ts1), current_timestamp);
-- WARNING Next line is for testing only
--_tstamp := COALESCE(_last_sequence.ts1_final, _last_sequence.ts1);
-- So we need to adjust timestamps by:
_deltatime := _tstamp - _planned_line.ts1;
RAISE NOTICE 'Planned end: %, actual end: % (%, %)', _planned_line.ts1, _tstamp, _planned_line.sequence, _last_sequence.sequence;
RAISE NOTICE 'Shifting times by % for sequences > %', _deltatime, _planned_line.sequence;
-- NOTE: This won't work if sequences are not, err… sequential.
-- NOTE: This has been known to happen in 2020.
UPDATE planned_lines
SET
ts0 = ts0 + _deltatime,
ts1 = ts1 + _deltatime
WHERE sequence > _planned_line.sequence;
RAISE NOTICE 'Deleting planned sequences up to %', _planned_line.sequence;
-- Remove all previous planner entries.
DELETE
FROM planned_lines
WHERE sequence <= _last_sequence.sequence;
END IF;
END IF;
END;
$$;
DROP TRIGGER IF EXISTS info_tg ON info;
CREATE TRIGGER info_tg AFTER INSERT OR DELETE OR UPDATE ON info FOR EACH ROW EXECUTE FUNCTION public.notify('info');
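-- Usage sketch: the procedure takes no arguments and is intended to be
-- invoked at regular intervals by an external process such as the runner:
--
--   CALL adjust_planner();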
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,91 @@
-- Upgrade the database from commit 81d9ea19 to 0a10c897.
--
-- NOTE: This upgrade must be applied to every schema in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This defines a new function ij_error(line, point, geometry) which
-- returns the crossline and inline distance (in metres) between the
-- geometry (which must be a point) and the preplot corresponding to
-- line / point.
--
-- To apply, run as the dougal user, for every schema in the database:
--
-- psql <<EOF
-- SET search_path TO survey_*,public;
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
BEGIN;
-- Return the crossline, inline error of `geom` with respect to `line` and `point`
-- in the project's binning grid.
CREATE OR REPLACE FUNCTION ij_error(line double precision, point double precision, geom public.geometry)
RETURNS public.geometry(Point, 0)
LANGUAGE plpgsql STABLE LEAKPROOF
AS $$
DECLARE
bp jsonb := binning_parameters();
ij public.geometry := to_binning_grid(geom, bp);
theta numeric := (bp->>'theta')::numeric * pi() / 180;
I_inc numeric DEFAULT 1;
J_inc numeric DEFAULT 1;
I_width numeric := (bp->>'I_width')::numeric;
J_width numeric := (bp->>'J_width')::numeric;
a numeric := (I_inc/I_width) * cos(theta);
b numeric := (I_inc/I_width) * -sin(theta);
c numeric := (J_inc/J_width) * sin(theta);
d numeric := (J_inc/J_width) * cos(theta);
xoff numeric := (bp->'origin'->>'I')::numeric;
yoff numeric := (bp->'origin'->>'J')::numeric;
E0 numeric := (bp->'origin'->>'easting')::numeric;
N0 numeric := (bp->'origin'->>'northing')::numeric;
error_i double precision;
error_j double precision;
BEGIN
error_i := (public.st_x(ij) - line) * I_width;
error_j := (public.st_y(ij) - point) * J_width;
RETURN public.ST_MakePoint(error_i, error_j);
END
$$;
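-- Usage sketch (line / point / coordinates and the SRID are hypothetical;
-- the geometry must be in the project's working CRS). ST_X is the crossline
-- component, ST_Y the inline component:
--
--   SELECT public.st_x(e) AS crossline_m, public.st_y(e) AS inline_m
--   FROM ij_error(1001, 5000,
--                 public.st_setsrid(public.st_makepoint(500000, 6500000), 32631)) AS e;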
-- Return the list of points and metadata for all sequences.
-- Only points which have a corresponding preplot are returned.
-- If available, final positions are returned as well, if not they
-- are NULL.
-- Likewise, crossline / inline errors are also returned as a PostGIS
-- 2D point both for raw and final data.
CREATE OR REPLACE VIEW sequences_detail AS
SELECT
rl.sequence, rl.line AS sailline,
rs.line, rs.point,
rs.tstamp,
rs.objref objRefRaw, fs.objref objRefFinal,
ST_Transform(pp.geometry, 4326) geometryPreplot,
ST_Transform(rs.geometry, 4326) geometryRaw,
ST_Transform(fs.geometry, 4326) geometryFinal,
ij_error(rs.line, rs.point, rs.geometry) errorRaw,
ij_error(rs.line, rs.point, fs.geometry) errorFinal,
json_build_object('preplot', pp.meta, 'raw', rs.meta, 'final', fs.meta) meta
FROM
raw_lines rl
INNER JOIN raw_shots rs USING (sequence)
INNER JOIN preplot_points pp ON rs.line = pp.line AND rs.point = pp.point
LEFT JOIN final_shots fs ON rl.sequence = fs.sequence AND rs.point = fs.point;
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,75 @@
-- Upgrade the database from commit 81d9ea19 to 74b3de5c.
--
-- This upgrade affects the `public` schema only.
--
-- It creates a new table, `queue_items`, for storing
-- requests and responses related to inter-API communication.
-- At the moment this means Equinor's ASAQC API, but it
-- should be applicable to others as well if the need
-- arises.
--
-- As well as the table, it adds:
--
-- * `queue_item_status`, an ENUM type.
-- * `update_timestamp`, a trigger function.
-- * Two triggers on `queue_items`.
--
-- To apply, run as the dougal user:
--
-- psql < $THIS_FILE
--
-- NOTE: It will fail harmlessly if applied twice.
-- Queues are global, not per project,
-- so they go in the `public` schema.
CREATE TYPE queue_item_status
AS ENUM (
'queued',
'cancelled',
'failed',
'sent'
);
CREATE TABLE IF NOT EXISTS queue_items (
item_id serial NOT NULL PRIMARY KEY,
-- One day we may want multiple queues, in that case we will
-- have a queue_id and a relation of queue definitions.
-- But not right now.
-- queue_id integer NOT NULL REFERENCES queues (queue_id),
status queue_item_status NOT NULL DEFAULT 'queued',
payload jsonb NOT NULL,
results jsonb NOT NULL DEFAULT '{}'::jsonb,
created_on timestamptz NOT NULL DEFAULT current_timestamp,
updated_on timestamptz NOT NULL DEFAULT current_timestamp,
not_before timestamptz NOT NULL DEFAULT '1970-01-01T00:00:00Z',
parent_id integer NULL REFERENCES queue_items (item_id)
);
-- Sets `updated_on` to current_timestamp unless an explicit
-- timestamp is part of the update.
--
-- This function can be reused with any table that has (or could have)
-- an `updated_on` column of type timestamptz.
CREATE OR REPLACE FUNCTION update_timestamp () RETURNS trigger AS
$$
BEGIN
IF NEW.updated_on IS NOT NULL THEN
NEW.updated_on := current_timestamp;
END IF;
RETURN NEW;
EXCEPTION
WHEN undefined_column THEN RETURN NEW;
END;
$$
LANGUAGE plpgsql;
CREATE TRIGGER queue_items_tg0
BEFORE INSERT OR UPDATE ON public.queue_items
FOR EACH ROW EXECUTE FUNCTION public.update_timestamp();
CREATE TRIGGER queue_items_tg1
AFTER INSERT OR DELETE OR UPDATE ON public.queue_items
FOR EACH ROW EXECUTE FUNCTION public.notify('queue_items');
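-- Usage sketch (the payload shape is hypothetical; it depends on the
-- consumer reading the queue):
--
--   INSERT INTO public.queue_items (payload)
--   VALUES ('{"action": "post", "body": {}}'::jsonb)
--   RETURNING item_id;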


@@ -0,0 +1,24 @@
-- Upgrade the database from commit 74b3de5c to commit 83be83e4.
--
-- NOTE: This upgrade only affects the `public` schema.
--
-- This inserts a database schema version into the database.
-- Note that we are not otherwise changing the schema, so older
-- server code will continue to run against this version.
--
-- ATTENTION!
--
-- This value should be incremented every time that the database
-- schema changes (either `public` or any of the survey schemas)
-- and is used by the server at start-up to detect if it is
-- running against a compatible schema version.
--
-- To apply, run as the dougal user:
--
-- psql < $THIS_FILE
--
-- NOTE: It can be applied multiple times without ill effect.
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.1.0"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.1.0"}' WHERE public.info.key = 'version';


@@ -0,0 +1,84 @@
-- Upgrade the database from commit 83be83e4 to 53ed096e.
--
-- New schema version: 0.2.0
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This migrates the file hashes to address issue #173.
-- The new hashes use size, modification time, creation time and the
-- first half of the MD5 hex digest of the file's absolute path.
--
-- It's a minor (rather than patch) version number increment because
-- changes to `bin/datastore.py` mean that the data is no longer
-- compatible with the hashing function.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can take a while if run on a large database.
-- NOTE: It can be applied multiple times without ill effect.
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE migrate_hashes (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Migrating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
EXECUTE format('UPDATE %I.files SET hash = array_to_string(array_append(trim_array(string_to_array(hash, '':''), 1), left(md5(path), 16)), '':'')', schema_name);
EXECUTE 'SET search_path TO public'; -- Back to the default search path for good measure
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE upgrade_10 () AS $$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL migrate_hashes(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL upgrade_10();
CALL show_notice('Cleaning up');
DROP PROCEDURE migrate_hashes (schema_name text);
DROP PROCEDURE upgrade_10 ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.2.0"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.2.0"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,189 @@
-- Add function to retrieve sequence/shotpoint from timestamps and vice-versa
--
-- New schema version: 0.2.1
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects the public schema.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- Two new functions are defined:
--
-- sequence_shot_from_tstamp(tstamp, [tolerance]) → sequence, point, delta
--
-- Returns a sequence + shotpoint if one falls within `tolerance` seconds
-- of `tstamp`. The tolerance may be omitted in which case it defaults to
-- three seconds. If multiple values match, it returns the closest in time.
--
-- tstamp_from_sequence_shot(sequence, point) → tstamp
--
-- Returns a timestamp given a sequence and point number.
--
-- NOTE: This last function must be called from a search path including a
-- project schema, as it accesses the raw_shots table.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can take a while if run on a large database.
-- NOTE: It can be applied multiple times without ill effect.
-- NOTE: This will lock the database while the transaction is active.
--
-- WARNING: Applying this upgrade drops the old tables. Ensure that you
-- have migrated the data first.
--
-- NOTE: This is a patch version change so it does not require a
-- backend restart.
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE FUNCTION tstamp_from_sequence_shot(
IN s numeric,
IN p numeric,
OUT "ts" timestamptz)
AS $inner$
SELECT tstamp FROM raw_shots WHERE sequence = s AND point = p LIMIT 1;
$inner$ LANGUAGE SQL;
COMMENT ON FUNCTION tstamp_from_sequence_shot(numeric, numeric)
IS 'Get the timestamp of an existing shotpoint.';
CREATE OR REPLACE FUNCTION tstamp_interpolate(s numeric, p numeric) RETURNS timestamptz
AS $inner$
DECLARE
ts0 timestamptz;
ts1 timestamptz;
pt0 numeric;
pt1 numeric;
BEGIN
SELECT tstamp, point
INTO ts0, pt0
FROM raw_shots
WHERE sequence = s AND point < p
ORDER BY point DESC LIMIT 1;
SELECT tstamp, point
INTO ts1, pt1
FROM raw_shots
WHERE sequence = s AND point > p
ORDER BY point ASC LIMIT 1;
RETURN (ts1-ts0)/abs(pt1-pt0)*abs(p-pt0)+ts0;
END;
$inner$ LANGUAGE PLPGSQL;
COMMENT ON FUNCTION tstamp_interpolate(numeric, numeric)
IS 'Interpolate a timestamp given sequence and point values.
It will try to find the points immediately before and after in the sequence and interpolate into the gap, which may consist of multiple missed shots.
If called on an existing shotpoint it will return an interpolated timestamp as if the shotpoint did not exist, as opposed to returning its actual timestamp.
Returns NULL if it is not possible to interpolate.';
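-- Usage sketch (values are hypothetical; requires a project schema in the
-- search path): estimate the timestamp of missed shotpoint 1042 in
-- sequence 1234:
--
--   SELECT tstamp_interpolate(1234, 1042);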
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $$
DECLARE
row RECORD;
BEGIN
CREATE OR REPLACE FUNCTION public.sequence_shot_from_tstamp(
IN ts timestamptz,
IN tolerance numeric,
OUT "sequence" numeric,
OUT "point" numeric,
OUT "delta" numeric)
AS $inner$
SELECT
(meta->>'_sequence')::numeric AS sequence,
(meta->>'_point')::numeric AS point,
extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts ) AS delta
FROM real_time_inputs
WHERE
meta ? '_sequence' AND
abs(extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts )) < tolerance
ORDER BY abs(extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts ))
LIMIT 1;
$inner$ LANGUAGE SQL;
COMMENT ON FUNCTION public.sequence_shot_from_tstamp(timestamptz, numeric)
IS 'Get sequence and shotpoint from timestamp.
Given a timestamp this function returns the closest shot to it within the given tolerance value.
This uses the `real_time_inputs` table and it does not give an indication of which project the shotpoint belongs to. It is assumed that a single project is being acquired at a given time.';
CREATE OR REPLACE FUNCTION public.sequence_shot_from_tstamp(
IN ts timestamptz,
OUT "sequence" numeric,
OUT "point" numeric,
OUT "delta" numeric)
AS $inner$
SELECT * FROM public.sequence_shot_from_tstamp(ts, 3);
$inner$ LANGUAGE SQL;
COMMENT ON FUNCTION public.sequence_shot_from_tstamp(timestamptz)
IS 'Get sequence and shotpoint from timestamp.
Overloaded form in which the tolerance value is implied and defaults to three seconds.';
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL pg_temp.upgrade_database();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade_database ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.2.1"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.2.1"}' WHERE public.info.key = 'version';
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,360 @@
-- Add new event log schema.
--
-- New schema version: 0.2.2
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
-- REQUIRES POSTGRESQL VERSION 14 OR NEWER
-- (Because of CREATE OR REPLACE TRIGGER)
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This is a redesign of the event logging mechanism. The old mechanism
-- relied on a distinction between sequence events (i.e., those which can
-- be associated to a shotpoint within a sequence), timed events (those
-- which occur outside any acquisition sequence) and so-called virtual
-- events (deduced from the data). It was inflexible and inefficient,
-- as most of the time we needed to merge those two types of events into
-- a single view.
--
-- The new mechanism:
-- - uses a single table
-- - accepts sequence event entries for shots or sequences which may not (yet)
-- exist. (https://gitlab.com/wgp/dougal/software/-/issues/170)
-- - keeps edit history (https://gitlab.com/wgp/dougal/software/-/issues/138)
-- - keeps track of when an entry was made or subsequently edited.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can take a while if run on a large database.
-- NOTE: It can be applied multiple times without ill effect, as long
-- as the new tables did not previously exist. If they did, they will
-- be emptied before migrating the data.
--
-- WARNING: Applying this upgrade migrates the old event data. It does
-- NOT yet drop the old tables, which is handled in a separate script,
-- leaving the actions here technically reversible without having to
-- restore from backup.
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE SEQUENCE IF NOT EXISTS event_log_uid_seq
AS integer
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
CREATE TABLE IF NOT EXISTS event_log_full (
-- uid is a unique id for each entry in the table,
-- including revisions of an existing entry.
uid integer NOT NULL PRIMARY KEY DEFAULT nextval('event_log_uid_seq'),
-- All revisions of an entry share the same id.
-- If inserting a new entry, id = uid.
id integer NOT NULL,
-- No default tstamp because, for instance, a user could
-- enter a sequence/point event referring to the future.
-- An external process should scan those at regular intervals
-- and populate the tstamp as needed.
tstamp timestamptz NULL,
sequence integer NULL,
point integer NULL,
remarks text NOT NULL DEFAULT '',
labels text[] NOT NULL DEFAULT ARRAY[]::text[],
-- TODO: Need a geometry column? Let us check performance as it is
-- and, if needed, add a geometry column plus a spatial index.
meta jsonb NOT NULL DEFAULT '{}'::jsonb,
validity tstzrange NOT NULL CHECK (NOT isempty(validity)),
-- We accept either:
-- - Just a tstamp
-- - Just a sequence / point pair
-- - All three
-- We don't accept:
-- - A sequence without a point or vice-versa
-- - Nothing being provided
CHECK (
(tstamp IS NOT NULL AND sequence IS NOT NULL AND point IS NOT NULL) OR
(tstamp IS NOT NULL AND sequence IS NULL AND point IS NULL) OR
(tstamp IS NULL AND sequence IS NOT NULL AND point IS NOT NULL)
)
);
CREATE INDEX IF NOT EXISTS event_log_id ON event_log_full USING btree (id);
CREATE OR REPLACE FUNCTION event_log_full_insert() RETURNS TRIGGER AS $inner$
BEGIN
NEW.id := COALESCE(NEW.id, NEW.uid);
NEW.validity := tstzrange(current_timestamp, NULL);
NEW.meta = COALESCE(NEW.meta, '{}'::jsonb);
NEW.labels = COALESCE(NEW.labels, ARRAY[]::text[]);
IF cardinality(NEW.labels) > 0 THEN
-- Remove duplicates
SELECT array_agg(DISTINCT elements)
INTO NEW.labels
FROM (SELECT unnest(NEW.labels) AS elements) AS labels;
END IF;
RETURN NEW;
END;
$inner$ LANGUAGE plpgsql;
CREATE OR REPLACE TRIGGER event_log_full_insert_tg
BEFORE INSERT ON event_log_full
FOR EACH ROW EXECUTE FUNCTION event_log_full_insert();
-- The public.notify() trigger to alert clients that something has changed
CREATE OR REPLACE TRIGGER event_log_full_notify_tg
AFTER INSERT OR DELETE OR UPDATE
ON event_log_full FOR EACH ROW EXECUTE FUNCTION public.notify('event');
--
-- VIEW event_log
--
-- This is what is exposed to the user most of the time.
-- It shows the current version of records in the event_log_full
-- table.
--
-- The user applies edits to this table directly, which are
-- processed via triggers.
--
CREATE OR REPLACE VIEW event_log AS
SELECT
id, tstamp, sequence, point, remarks, labels, meta,
uid <> id AS has_edits,
lower(validity) AS modified_on
FROM event_log_full
WHERE validity @> current_timestamp;
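-- Usage sketch (the label is hypothetical): list the current (latest
-- revision) events carrying a given label:
--
--   SELECT id, tstamp, sequence, point, remarks
--   FROM event_log
--   WHERE 'FSP' = ANY(labels);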
CREATE OR REPLACE FUNCTION event_log_update() RETURNS TRIGGER AS $inner$
BEGIN
IF (TG_OP = 'INSERT') THEN
-- Complete the tstamp if possible
IF NEW.sequence IS NOT NULL AND NEW.point IS NOT NULL AND NEW.tstamp IS NULL THEN
SELECT COALESCE(
tstamp_from_sequence_shot(NEW.sequence, NEW.point),
tstamp_interpolate(NEW.sequence, NEW.point)
)
INTO NEW.tstamp;
END IF;
-- Any id that is provided will be ignored. The generated
-- id will match uid.
INSERT INTO event_log_full
(tstamp, sequence, point, remarks, labels, meta)
VALUES (NEW.tstamp, NEW.sequence, NEW.point, NEW.remarks, NEW.labels, NEW.meta);
RETURN NEW;
ELSIF (TG_OP = 'UPDATE') THEN
-- Set end of validity and create a new entry with id
-- matching that of the old entry.
-- NOTE: Do not allow updating an event that has meta.readonly = true
IF EXISTS
(SELECT *
FROM event_log_full
WHERE id = OLD.id AND (meta->>'readonly')::boolean IS TRUE)
THEN
RAISE check_violation USING MESSAGE = 'Cannot modify read-only entry';
RETURN NULL;
END IF;
-- If the sequence / point has changed, and no new tstamp is provided, get one
IF (NEW.sequence <> OLD.sequence OR NEW.point <> OLD.point)
AND NEW.sequence IS NOT NULL AND NEW.point IS NOT NULL
AND (NEW.tstamp IS NULL OR NEW.tstamp = OLD.tstamp) THEN
SELECT COALESCE(
tstamp_from_sequence_shot(NEW.sequence, NEW.point),
tstamp_interpolate(NEW.sequence, NEW.point)
)
INTO NEW.tstamp;
END IF;
UPDATE event_log_full
SET validity = tstzrange(lower(validity), current_timestamp)
WHERE validity @> current_timestamp AND id = OLD.id;
-- Any attempt to modify id will be ignored.
INSERT INTO event_log_full
(id, tstamp, sequence, point, remarks, labels, meta)
VALUES (OLD.id, NEW.tstamp, NEW.sequence, NEW.point, NEW.remarks, NEW.labels, NEW.meta);
RETURN NEW;
ELSIF (TG_OP = 'DELETE') THEN
-- Set end of validity.
-- NOTE: We *do* allow deleting an event that has meta.readonly = true
-- This could be of interest if for instance we wanted to keep the history
-- of QC results for a point, provided that the QC routines write to
-- event_log and not event_log_full
UPDATE event_log_full
SET validity = tstzrange(lower(validity), current_timestamp)
WHERE validity @> current_timestamp AND id = OLD.id;
RETURN NULL;
END IF;
END;
$inner$ LANGUAGE plpgsql;
CREATE OR REPLACE TRIGGER event_log_tg
INSTEAD OF INSERT OR UPDATE OR DELETE ON event_log
FOR EACH ROW EXECUTE FUNCTION event_log_update();
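-- Usage sketch (all values are hypothetical): edits go through the view and
-- are versioned in event_log_full by the INSTEAD OF trigger:
--
--   INSERT INTO event_log (sequence, point, remarks, labels)
--   VALUES (1234, 1001, 'First shotpoint', ARRAY['FSP']);
--   UPDATE event_log SET remarks = 'First good shotpoint' WHERE id = 1;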
-- NOTE
-- This is where we migrate the actual data
RAISE NOTICE 'Migrating schema %', schema_name;
-- We start by deleting any data that the new tables might
-- have had if they already existed.
DELETE FROM event_log_full;
-- We purposefully bypass event_log here, as the tables we're
-- migrating from only contain a single version of each event.
INSERT INTO event_log_full (tstamp, sequence, point, remarks, labels, meta)
SELECT
tstamp, sequence, point, remarks, labels,
meta || json_build_object('geometry', geometry, 'readonly', virtual)::jsonb
FROM events;
UPDATE event_log_full SET meta = meta - 'geometry' WHERE meta->>'geometry' IS NULL;
UPDATE event_log_full SET meta = meta - 'readonly' WHERE (meta->'readonly')::boolean IS false;
-- This function used the superseded `events` view.
-- We need to drop it because we're changing the return type.
DROP FUNCTION IF EXISTS label_in_sequence (_sequence integer, _label text);
CREATE OR REPLACE FUNCTION label_in_sequence (_sequence integer, _label text)
RETURNS event_log
LANGUAGE sql
AS $inner$
SELECT * FROM event_log WHERE sequence = _sequence AND _label = ANY(labels);
$inner$;
-- This function used the superseded `events` view (and some strange logic).
CREATE OR REPLACE PROCEDURE handle_final_line_events (_seq integer, _label text, _column text)
LANGUAGE plpgsql
AS $inner$
DECLARE
_line final_lines_summary%ROWTYPE;
_column_value integer;
_tg_name text := 'final_line';
_event event_log%ROWTYPE;
event_id integer;
BEGIN
SELECT * INTO _line FROM final_lines_summary WHERE sequence = _seq;
_event := label_in_sequence(_seq, _label);
_column_value := row_to_json(_line)->>_column;
--RAISE NOTICE '% is %', _label, _event;
--RAISE NOTICE 'Line is %', _line;
--RAISE NOTICE '% is % (%)', _column, _column_value, _label;
IF _event IS NULL THEN
--RAISE NOTICE 'We will populate the event log from the sequence data';
INSERT INTO event_log (sequence, point, remarks, labels, meta)
VALUES (
-- The sequence
_seq,
-- The shotpoint
_column_value,
-- Remark. Something like "FSP <linename>"
format('%s %s', _label, (SELECT meta->>'lineName' FROM final_lines WHERE sequence = _seq)),
-- Label
ARRAY[_label],
-- Meta. Something like {"auto" : {"FSP" : "final_line"}}
json_build_object('auto', json_build_object(_label, _tg_name))
);
ELSE
--RAISE NOTICE 'We may populate the sequence meta from the event log';
--RAISE NOTICE 'Unless the event log was populated by us previously';
--RAISE NOTICE 'Populated by us previously? %', _event.meta->'auto'->>_label = _tg_name;
IF _event.meta->'auto'->>_label IS DISTINCT FROM _tg_name THEN
--RAISE NOTICE 'Adding % found in events log to final_line meta', _label;
UPDATE final_lines
SET meta = jsonb_set(meta, ARRAY[_label], to_jsonb(_event.point))
WHERE sequence = _seq;
END IF;
END IF;
END;
$inner$;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_12 () AS $$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL pg_temp.upgrade_12();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade_12 ();
CALL show_notice('Updating db_schema version');
-- This is technically still compatible with 0.2.0 as we are only adding
-- some more tables and views but not yet dropping the old ones, which we
-- will do separately so that these scripts do not get too big.
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.2.2"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.2.2"}' WHERE public.info.key = 'version';
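-- For illustration only (not executed by this upgrade): the || operator
-- merges JSONB objects, overwriting db_schema while preserving any other
-- keys that may exist in public.info.value, e.g.:
--
--   SELECT '{"app": "1.4", "db_schema": "0.2.0"}'::jsonb || '{"db_schema": "0.2.2"}'::jsonb;
--   -- => {"app": "1.4", "db_schema": "0.2.2"}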
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,98 @@
-- Migrate events to new schema
--
-- New schema version: 0.3.0
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This migrates the data from the old event log tables to the new schema.
-- It is a *very* good idea to review the data manually after the migration
-- as issues with the logs that had gone unnoticed may become evident now.
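--
-- For illustration, a quick post-migration sanity check (run with a survey
-- schema at the front of search_path; the counts should match immediately
-- after the migration, assuming nothing wrote to the logs in between):
--
--   SELECT (SELECT count(*) FROM events) AS old_count,
--          (SELECT count(*) FROM event_log_full) AS new_count;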
--
-- WARNING: If data exists in the new event tables, IT WILL BE TRUNCATED.
--
-- Other than that, this migration is fairly benign as it does not modify
-- the old data.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can take a while if run on a large database.
-- NOTE: It can be applied multiple times without ill effect.
-- NOTE: This will lock the new event tables while the transaction is active.
--
-- WARNING: This is a minor (not patch) version change, meaning that it requires
-- an upgrade and restart of the backend server.
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
TRUNCATE event_log_full;
-- NOTE: meta->>'virtual' = TRUE means that the event was created algorithmically
-- and should not be user editable.
INSERT INTO event_log_full (tstamp, sequence, point, remarks, labels, meta)
SELECT
tstamp, sequence, point, remarks, labels,
meta || json_build_object('geometry', geometry, 'readonly', virtual)::jsonb
FROM events;
-- We purposefully bypass event_log here
UPDATE event_log_full SET meta = meta - 'geometry' WHERE meta->>'geometry' IS NULL;
UPDATE event_log_full SET meta = meta - 'readonly' WHERE (meta->'readonly')::boolean IS false;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL pg_temp.upgrade_database();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade_database ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.0"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.0"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,99 @@
-- Drop old event tables.
--
-- New schema version: 0.3.1
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
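-- For example, a compressed backup can be taken with pg_dump; the database
-- name is assumed here to be "dougal", adjust as needed:
--
--   pg_dump -Fc -f dougal-$(date +%F).backup dougal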
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This completes the migration from the old event logging mechanism by
-- DROPPING THE OLD DATABASE OBJECTS, MAKING THE MIGRATION IRREVERSIBLE,
-- other than by restoring from backup and manually transferring any new
-- data that may have been created in the meantime.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can take a while if run on a large database.
-- NOTE: It can be applied multiple times without ill effect.
-- NOTE: This will lock the database while the transaction is active.
--
-- WARNING: Applying this upgrade drops the old tables. Ensure that you
-- have migrated the data first.
--
-- NOTE: This is a patch version change so it does not require a
-- backend restart.
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
DROP FUNCTION IF EXISTS
label_in_sequence(integer,text), reset_events_serials();
DROP VIEW IF EXISTS
events_midnight_shot, events_seq_timed, events_labels, "events";
DROP TABLE IF EXISTS
events_seq_labels, events_timed_labels, events_timed_seq, events_seq, events_timed;
DROP SEQUENCE IF EXISTS
events_seq_id_seq, events_timed_id_seq;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL pg_temp.upgrade_database();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade_database ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.1"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.1"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,136 @@
-- Fix project_summary view.
--
-- New schema version: 0.3.2
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This fixes a problem with the project_summary view. In its common table
-- expression, the view definition tried to search public.projects based on
-- the search path value with the following expression:
--
-- (current_setting('search_path'::text) ~~ (p.schema || '%'::text))
--
-- That is of course bound to fail as soon as the schema goes above `survey_9`
-- because `survey_10 LIKE ('survey_1' || '%')` is TRUE.
--
-- The new mechanism relies on splitting the search_path.
--
-- NOTE: The survey schema needs to be the leftmost element in search_path.
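--
-- For illustration only (not executed by this upgrade): with a search_path
-- of 'survey_10,public', the old and new tests disagree:
--
--   SELECT 'survey_10,public' LIKE ('survey_1' || '%');          -- TRUE (spurious match)
--   SELECT split_part('survey_10,public', ',', 1) = 'survey_1';  -- FALSE (correct)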
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE VIEW project_summary AS
WITH fls AS (
SELECT avg((final_lines_summary.duration / ((final_lines_summary.num_points - 1))::double precision)) AS shooting_rate,
avg((final_lines_summary.length / date_part('epoch'::text, final_lines_summary.duration))) AS speed,
sum(final_lines_summary.duration) AS prod_duration,
sum(final_lines_summary.length) AS prod_distance
FROM final_lines_summary
), project AS (
SELECT p.pid,
p.name,
p.schema
FROM public.projects p
WHERE (split_part(current_setting('search_path'::text), ','::text, 1) = p.schema)
)
SELECT project.pid,
project.name,
project.schema,
( SELECT count(*) AS count
FROM preplot_lines
WHERE (preplot_lines.class = 'V'::bpchar)) AS lines,
ps.total,
ps.virgin,
ps.prime,
ps.other,
ps.ntba,
ps.remaining,
( SELECT to_json(fs.*) AS to_json
FROM final_shots fs
ORDER BY fs.tstamp
LIMIT 1) AS fsp,
( SELECT to_json(fs.*) AS to_json
FROM final_shots fs
ORDER BY fs.tstamp DESC
LIMIT 1) AS lsp,
( SELECT count(*) AS count
FROM raw_lines rl) AS seq_raw,
( SELECT count(*) AS count
FROM final_lines rl) AS seq_final,
fls.prod_duration,
fls.prod_distance,
fls.speed AS shooting_rate
FROM preplot_summary ps,
fls,
project;
ALTER TABLE project_summary OWNER TO postgres;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_15 () AS $$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL pg_temp.upgrade_15();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade_15 ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.2"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.2"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,169 @@
-- Fix not being able to edit a time-based event.
--
-- New schema version: 0.3.3
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- The event_log_update() function that gets called when trying to update
-- the event_log view will not work if the caller does not provide a timestamp
-- or sequence + point in the list of fields to be updated. See:
-- https://gitlab.com/wgp/dougal/software/-/issues/198
--
-- This fixes the problem by liberally using COALESCE() to merge the OLD
-- and NEW records.
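--
-- For example (hypothetical event id), an edit that only touches the
-- remarks of a time-based event should now succeed:
--
--   UPDATE event_log SET remarks = 'Soft start completed' WHERE id = 42;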
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE FUNCTION event_log_update() RETURNS trigger
LANGUAGE plpgsql
AS $inner$
BEGIN
IF (TG_OP = 'INSERT') THEN
-- Complete the tstamp if possible
IF NEW.sequence IS NOT NULL AND NEW.point IS NOT NULL AND NEW.tstamp IS NULL THEN
SELECT COALESCE(
tstamp_from_sequence_shot(NEW.sequence, NEW.point),
tstamp_interpolate(NEW.sequence, NEW.point)
)
INTO NEW.tstamp;
END IF;
-- Any id that is provided will be ignored. The generated
-- id will match uid.
INSERT INTO event_log_full
(tstamp, sequence, point, remarks, labels, meta)
VALUES (NEW.tstamp, NEW.sequence, NEW.point, NEW.remarks, NEW.labels, NEW.meta);
RETURN NEW;
ELSIF (TG_OP = 'UPDATE') THEN
-- Set end of validity and create a new entry with id
-- matching that of the old entry.
-- NOTE: Do not allow updating an event that has meta.readonly = true
IF EXISTS
(SELECT *
FROM event_log_full
WHERE id = OLD.id AND (meta->>'readonly')::boolean IS TRUE)
THEN
RAISE check_violation USING MESSAGE = 'Cannot modify read-only entry';
RETURN NULL;
END IF;
-- If the sequence / point has changed, and no new tstamp is provided, get one
IF (NEW.sequence <> OLD.sequence OR NEW.point <> OLD.point)
AND NEW.sequence IS NOT NULL AND NEW.point IS NOT NULL
AND (NEW.tstamp IS NULL OR NEW.tstamp = OLD.tstamp) THEN
SELECT COALESCE(
tstamp_from_sequence_shot(NEW.sequence, NEW.point),
tstamp_interpolate(NEW.sequence, NEW.point)
)
INTO NEW.tstamp;
END IF;
UPDATE event_log_full
SET validity = tstzrange(lower(validity), current_timestamp)
WHERE validity @> current_timestamp AND id = OLD.id;
-- Any attempt to modify id will be ignored.
INSERT INTO event_log_full
(id, tstamp, sequence, point, remarks, labels, meta)
VALUES (
OLD.id,
COALESCE(NEW.tstamp, OLD.tstamp),
COALESCE(NEW.sequence, OLD.sequence),
COALESCE(NEW.point, OLD.point),
COALESCE(NEW.remarks, OLD.remarks),
COALESCE(NEW.labels, OLD.labels),
COALESCE(NEW.meta, OLD.meta)
);
RETURN NEW;
ELSIF (TG_OP = 'DELETE') THEN
-- Set end of validity.
-- NOTE: We *do* allow deleting an event that has meta.readonly = true
-- This could be of interest if for instance we wanted to keep the history
-- of QC results for a point, provided that the QC routines write to
-- event_log and not event_log_full
UPDATE event_log_full
SET validity = tstzrange(lower(validity), current_timestamp)
WHERE validity @> current_timestamp AND id = OLD.id;
RETURN NULL;
END IF;
END;
$inner$;
CREATE OR REPLACE TRIGGER event_log_tg INSTEAD OF INSERT OR DELETE OR UPDATE ON event_log FOR EACH ROW EXECUTE FUNCTION event_log_update();
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_16 () AS $$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL pg_temp.upgrade_16();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade_16 ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.3"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.3"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,163 @@
-- Add augment_event_data() and geometry_from_tstamp().
--
-- New schema version: 0.3.4
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This creates a new procedure augment_event_data() which tries to
-- populate missing event_log data, namely timestamps and geometries.
--
-- To do this it also adds a function public.geometry_from_tstamp()
-- which, given a timestamp, tries to fetch a geometry from real_time_inputs.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE PROCEDURE augment_event_data ()
LANGUAGE sql
AS $inner$
-- Populate the timestamp of sequence / point events
UPDATE event_log_full
SET tstamp = tstamp_from_sequence_shot(sequence, point)
WHERE
tstamp IS NULL AND sequence IS NOT NULL AND point IS NOT NULL;
-- Populate the geometry of sequence / point events for which
-- there is raw_shots data.
UPDATE event_log_full
SET meta = meta ||
jsonb_build_object(
'geometry',
(
SELECT st_transform(geometry, 4326)::jsonb
FROM raw_shots rs
WHERE rs.sequence = event_log_full.sequence AND rs.point = event_log_full.point
)
)
WHERE
sequence IS NOT NULL AND point IS NOT NULL AND
NOT meta ? 'geometry';
-- Populate the geometry of time-based events
UPDATE event_log_full e
SET
meta = meta || jsonb_build_object('geometry',
(SELECT st_transform(g.geometry, 4326)::jsonb
FROM geometry_from_tstamp(e.tstamp, 3) g))
WHERE
tstamp IS NOT NULL AND
sequence IS NULL AND point IS NULL AND
NOT meta ? 'geometry';
-- Get rid of null geometries
UPDATE event_log_full
SET
meta = meta - 'geometry'
WHERE
jsonb_typeof(meta->'geometry') = 'null';
-- Simplify the GeoJSON when the CRS is EPSG:4326
UPDATE event_log_full
SET
meta = meta #- '{geometry, crs}'
WHERE
meta->'geometry'->'crs'->'properties'->>'name' = 'EPSG:4326';
$inner$;
COMMENT ON PROCEDURE augment_event_data()
IS 'Populate missing timestamps and geometries in event_log_full';
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_17 () AS $$
DECLARE
row RECORD;
BEGIN
CALL show_notice('Adding index to real_time_inputs.meta->tstamp');
CREATE INDEX IF NOT EXISTS meta_tstamp_idx
ON public.real_time_inputs
USING btree ((meta->>'tstamp') DESC);
CALL show_notice('Creating function geometry_from_tstamp');
CREATE OR REPLACE FUNCTION public.geometry_from_tstamp(
IN ts timestamptz,
IN tolerance numeric,
OUT "geometry" geometry,
OUT "delta" numeric)
AS $inner$
SELECT
geometry,
extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts ) AS delta
FROM real_time_inputs
WHERE
geometry IS NOT NULL AND
abs(extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts )) < tolerance
ORDER BY abs(extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts ))
LIMIT 1;
$inner$ LANGUAGE SQL;
COMMENT ON FUNCTION public.geometry_from_tstamp(timestamptz, numeric)
IS 'Get geometry from timestamp';
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL pg_temp.upgrade_17();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade_17 ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.4"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.4"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,158 @@
-- (Re-)define label_in_sequence() and its dependents.
--
-- New schema version: 0.3.5
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- The function label_in_sequence(integer, text) was missing from the
-- production schemas. This patch (re-)defines the function as well
-- as the other functions that depend on it (otherwise the new
-- definition does not get picked up).
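--
-- For illustration (hypothetical sequence and label):
--
--   SELECT point, tstamp FROM label_in_sequence(11, 'FSP');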
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE FUNCTION label_in_sequence(_sequence integer, _label text) RETURNS event_log
LANGUAGE sql
AS $inner$
SELECT * FROM event_log WHERE sequence = _sequence AND _label = ANY(labels);
$inner$;
-- We need to redefine the functions / procedures that call label_in_sequence
CREATE OR REPLACE PROCEDURE handle_final_line_events(IN _seq integer, IN _label text, IN _column text)
LANGUAGE plpgsql
AS $inner$
DECLARE
_line final_lines_summary%ROWTYPE;
_column_value integer;
_tg_name text := 'final_line';
_event event_log%ROWTYPE;
event_id integer;
BEGIN
SELECT * INTO _line FROM final_lines_summary WHERE sequence = _seq;
_event := label_in_sequence(_seq, _label);
_column_value := row_to_json(_line)->>_column;
--RAISE NOTICE '% is %', _label, _event;
--RAISE NOTICE 'Line is %', _line;
--RAISE NOTICE '% is % (%)', _column, _column_value, _label;
IF _event IS NULL THEN
--RAISE NOTICE 'We will populate the event log from the sequence data';
INSERT INTO event_log (sequence, point, remarks, labels, meta)
VALUES (
-- The sequence
_seq,
-- The shotpoint
_column_value,
-- Remark. Something like "FSP <linename>"
format('%s %s', _label, (SELECT meta->>'lineName' FROM final_lines WHERE sequence = _seq)),
-- Label
ARRAY[_label],
-- Meta. Something like {"auto" : {"FSP" : "final_line"}}
json_build_object('auto', json_build_object(_label, _tg_name))
);
ELSE
--RAISE NOTICE 'We may populate the sequence meta from the event log';
--RAISE NOTICE 'Unless the event log was populated by us previously';
--RAISE NOTICE 'Populated by us previously? %', _event.meta->'auto'->>_label = _tg_name;
IF _event.meta->'auto'->>_label IS DISTINCT FROM _tg_name THEN
--RAISE NOTICE 'Adding % found in events log to final_line meta', _label;
UPDATE final_lines
SET meta = jsonb_set(meta, ARRAY[_label], to_jsonb(_event.point))
WHERE sequence = _seq;
END IF;
END IF;
END;
$inner$;
CREATE OR REPLACE PROCEDURE final_line_post_import(IN _seq integer)
LANGUAGE plpgsql
AS $inner$
BEGIN
CALL handle_final_line_events(_seq, 'FSP', 'fsp');
CALL handle_final_line_events(_seq, 'FGSP', 'fsp');
CALL handle_final_line_events(_seq, 'LGSP', 'lsp');
CALL handle_final_line_events(_seq, 'LSP', 'lsp');
END;
$inner$;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_18 () AS $$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$$ LANGUAGE plpgsql;
CALL pg_temp.upgrade_18();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade_18 ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.5"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.5"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,162 @@
-- Optimise geometry_from_tstamp() (issue #241).
--
-- New schema version: 0.3.6
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This optimises geometry_from_tstamp() by many orders of magnitude
-- (issue #241). The redefinition of geometry_from_tstamp() necessitates
-- redefining dependent functions.
--
-- We also drop the index on real_time_inputs.meta->'tstamp' as it is no
-- longer used.
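--
-- For illustration, the predicate change that lets an index on
-- real_time_inputs.tstamp be used:
--
--   before: abs(extract('epoch' FROM (meta->>'tstamp')::timestamptz - ts)) < tolerance
--   after:  tstamp BETWEEN ts - tolerance * interval '1 second'
--                      AND ts + tolerance * interval '1 second'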
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE PROCEDURE augment_event_data ()
LANGUAGE sql
AS $inner$
-- Populate the timestamp of sequence / point events
UPDATE event_log_full
SET tstamp = tstamp_from_sequence_shot(sequence, point)
WHERE
tstamp IS NULL AND sequence IS NOT NULL AND point IS NOT NULL;
-- Populate the geometry of sequence / point events for which
-- there is raw_shots data.
UPDATE event_log_full
SET meta = meta ||
jsonb_build_object(
'geometry',
(
SELECT st_transform(geometry, 4326)::jsonb
FROM raw_shots rs
WHERE rs.sequence = event_log_full.sequence AND rs.point = event_log_full.point
)
)
WHERE
sequence IS NOT NULL AND point IS NOT NULL AND
NOT meta ? 'geometry';
-- Populate the geometry of time-based events
UPDATE event_log_full e
SET
meta = meta || jsonb_build_object('geometry',
(SELECT st_transform(g.geometry, 4326)::jsonb
FROM geometry_from_tstamp(e.tstamp, 3) g))
WHERE
tstamp IS NOT NULL AND
sequence IS NULL AND point IS NULL AND
NOT meta ? 'geometry';
-- Get rid of null geometries
UPDATE event_log_full
SET
meta = meta - 'geometry'
WHERE
jsonb_typeof(meta->'geometry') = 'null';
-- Simplify the GeoJSON when the CRS is EPSG:4326
UPDATE event_log_full
SET
meta = meta #- '{geometry, crs}'
WHERE
meta->'geometry'->'crs'->'properties'->>'name' = 'EPSG:4326';
$inner$;
COMMENT ON PROCEDURE augment_event_data()
IS 'Populate missing timestamps and geometries in event_log_full';
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
BEGIN
CALL show_notice('Dropping index from real_time_inputs.meta->tstamp');
DROP INDEX IF EXISTS meta_tstamp_idx;
CALL show_notice('Creating function geometry_from_tstamp');
CREATE OR REPLACE FUNCTION public.geometry_from_tstamp(
IN ts timestamptz,
IN tolerance numeric,
OUT "geometry" geometry,
OUT "delta" numeric)
AS $inner$
SELECT
geometry,
extract('epoch' FROM tstamp - ts ) AS delta
FROM real_time_inputs
WHERE
geometry IS NOT NULL AND
tstamp BETWEEN (ts - tolerance * interval '1 second') AND (ts + tolerance * interval '1 second')
ORDER BY abs(extract('epoch' FROM tstamp - ts ))
LIMIT 1;
$inner$ LANGUAGE SQL;
COMMENT ON FUNCTION public.geometry_from_tstamp(timestamptz, numeric)
IS 'Get geometry from timestamp';
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.6"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.6"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,254 @@
-- Update adjust_planner() for the new events schema.
--
-- New schema version: 0.3.7
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This updates the adjust_planner() procedure to take into account the
-- new events schema (the `events` view has been replaced by `event_log`).
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CALL pg_temp.show_notice('Replacing adjust_planner() procedure');
CREATE OR REPLACE PROCEDURE adjust_planner()
LANGUAGE plpgsql
AS $$
DECLARE
_planner_config jsonb;
_planned_line planned_lines%ROWTYPE;
_lag interval;
_last_sequence sequences_summary%ROWTYPE;
_deltatime interval;
_shotinterval interval;
_tstamp timestamptz;
_incr integer;
BEGIN
SET CONSTRAINTS planned_lines_pkey DEFERRED;
SELECT data->'planner'
INTO _planner_config
FROM file_data
WHERE data ? 'planner';
SELECT *
INTO _last_sequence
FROM sequences_summary
ORDER BY sequence DESC
LIMIT 1;
SELECT *
INTO _planned_line
FROM planned_lines
WHERE sequence = _last_sequence.sequence AND line = _last_sequence.line;
SELECT
COALESCE(
((lead(ts0) OVER (ORDER BY sequence)) - ts1),
make_interval(mins => (_planner_config->>'defaultLineChangeDuration')::integer)
)
INTO _lag
FROM planned_lines
WHERE sequence = _last_sequence.sequence AND line = _last_sequence.line;
_incr = sign(_last_sequence.lsp - _last_sequence.fsp);
RAISE NOTICE '_planner_config: %', _planner_config;
RAISE NOTICE '_last_sequence: %', _last_sequence;
RAISE NOTICE '_planned_line: %', _planned_line;
RAISE NOTICE '_incr: %', _incr;
-- Does the latest sequence match a planned sequence?
IF _planned_line IS NULL THEN -- No it doesn't
RAISE NOTICE 'Latest sequence shot does not match a planned sequence';
SELECT * INTO _planned_line FROM planned_lines ORDER BY sequence ASC LIMIT 1;
RAISE NOTICE '_planned_line: %', _planned_line;
IF _planned_line.sequence <= _last_sequence.sequence THEN
RAISE NOTICE 'Renumbering the planned sequences starting from %', _planned_line.sequence + 1;
-- Renumber the planned sequences starting from last shot sequence number + 1
UPDATE planned_lines
SET sequence = sequence + _last_sequence.sequence - _planned_line.sequence + 1;
END IF;
-- The correction to make to the first planned line's ts0 will be based on either the last
-- sequence's EOL + default line change time or the current time, whichever is later.
_deltatime := GREATEST(COALESCE(_last_sequence.ts1_final, _last_sequence.ts1) + make_interval(mins => (_planner_config->>'defaultLineChangeDuration')::integer), current_timestamp) - _planned_line.ts0;
-- Is the start time of the first planned line in the past? (±5 mins)
IF _planned_line.ts0 < (current_timestamp - make_interval(mins => 5)) THEN
RAISE NOTICE 'First planned line is in the past. Adjusting times by %', _deltatime;
-- Adjust the start / end time of the planned lines by assuming that we are at
-- `defaultLineChangeDuration` minutes away from SOL of the first planned line.
UPDATE planned_lines
SET
ts0 = ts0 + _deltatime,
ts1 = ts1 + _deltatime;
END IF;
ELSE -- Yes it does
RAISE NOTICE 'Latest sequence does match a planned sequence: %, %', _planned_line.sequence, _planned_line.line;
-- Is it online?
IF EXISTS(SELECT 1 FROM raw_lines_files WHERE sequence = _last_sequence.sequence AND hash = '*online*') THEN
-- Yes it is
RAISE NOTICE 'Sequence % is online', _last_sequence.sequence;
-- Let us get the SOL from the events log if we can
RAISE NOTICE 'Trying to set fsp, ts0 from events log FSP, FGSP';
WITH e AS (
SELECT * FROM event_log
WHERE
sequence = _last_sequence.sequence
AND ('FSP' = ANY(labels) OR 'FGSP' = ANY(labels))
ORDER BY tstamp LIMIT 1
)
UPDATE planned_lines
SET
fsp = COALESCE(e.point, fsp),
ts0 = COALESCE(e.tstamp, ts0)
FROM e
WHERE planned_lines.sequence = _last_sequence.sequence;
-- Shot interval
_shotinterval := (_last_sequence.ts1 - _last_sequence.ts0) / abs(_last_sequence.lsp - _last_sequence.fsp);
RAISE NOTICE 'Estimating EOL from current shot interval: %', _shotinterval;
SELECT (abs(lsp-fsp) * _shotinterval + ts0) - ts1
INTO _deltatime
FROM planned_lines
WHERE sequence = _last_sequence.sequence;
---- Set ts1 for the current sequence
--UPDATE planned_lines
--SET
--ts1 = (abs(lsp-fsp) * _shotinterval) + ts0
--WHERE sequence = _last_sequence.sequence;
RAISE NOTICE 'Adjustment is %', _deltatime;
IF abs(EXTRACT(EPOCH FROM _deltatime)) < 8 THEN
RAISE NOTICE 'Adjustment too small (< 8 s), so not applying it';
RETURN;
END IF;
-- Adjust ts1 for the current sequence
UPDATE planned_lines
SET ts1 = ts1 + _deltatime
WHERE sequence = _last_sequence.sequence;
-- Now shift all sequences after
UPDATE planned_lines
SET ts0 = ts0 + _deltatime, ts1 = ts1 + _deltatime
WHERE sequence > _last_sequence.sequence;
RAISE NOTICE 'Deleting planned sequences before %', _planned_line.sequence;
-- Remove all previous planner entries.
DELETE
FROM planned_lines
WHERE sequence < _last_sequence.sequence;
ELSE
-- No it isn't
RAISE NOTICE 'Sequence % is offline', _last_sequence.sequence;
-- We were supposed to finish at _planned_line.ts1 but we finished at:
_tstamp := GREATEST(COALESCE(_last_sequence.ts1_final, _last_sequence.ts1), current_timestamp);
-- WARNING Next line is for testing only
--_tstamp := COALESCE(_last_sequence.ts1_final, _last_sequence.ts1);
-- So we need to adjust timestamps by:
_deltatime := _tstamp - _planned_line.ts1;
RAISE NOTICE 'Planned end: %, actual end: % (%, %)', _planned_line.ts1, _tstamp, _planned_line.sequence, _last_sequence.sequence;
RAISE NOTICE 'Shifting times by % for sequences > %', _deltatime, _planned_line.sequence;
-- NOTE: This won't work if sequences are not, err… sequential.
-- NOTE: This has been known to happen in 2020.
UPDATE planned_lines
SET
ts0 = ts0 + _deltatime,
ts1 = ts1 + _deltatime
WHERE sequence > _planned_line.sequence;
RAISE NOTICE 'Deleting planned sequences up to %', _planned_line.sequence;
-- Remove all previous planner entries.
DELETE
FROM planned_lines
WHERE sequence <= _last_sequence.sequence;
END IF;
END IF;
END;
$$;
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.7"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.7"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,267 @@
-- Add event_position() and event_meta() functions.
--
-- New schema version: 0.3.8
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This adds event_position() and event_meta() functions which are used
-- to retrieve position or metadata, respectively, given either a timestamp
-- or a sequence / point pair. Intended to be used in the context of #229.
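--
-- For illustration (hypothetical values):
--
--   SELECT event_position(11, 2600);              -- by sequence / point
--   SELECT event_meta('2025-08-21 12:00:00+00');  -- by timestamp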
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
--
-- event_position(): Fetch event position
--
CREATE OR REPLACE FUNCTION event_position (
tstamp timestamptz, sequence integer, point integer, tolerance numeric
)
RETURNS geometry
AS $$
DECLARE
position geometry;
BEGIN
-- Try and get position by sequence / point first
IF sequence IS NOT NULL AND point IS NOT NULL THEN
-- Try and get the position from final_shots or raw_shots
SELECT COALESCE(f.geometry, r.geometry) geometry
INTO position
FROM raw_shots r LEFT JOIN final_shots f USING (sequence, point)
WHERE r.sequence = event_position.sequence AND r.point = event_position.point;
IF position IS NOT NULL THEN
RETURN position;
ELSIF tstamp IS NULL THEN
-- Get the timestamp for the sequence / point, if we can.
-- It will be used later in the function as we fall back
-- to timestamp based search.
-- We also adjust the tolerance as we're now dealing with
-- an exact timestamp.
SELECT COALESCE(f.tstamp, r.tstamp) tstamp, 0.002 tolerance
INTO tstamp, tolerance
FROM raw_shots r LEFT JOIN final_shots f USING (sequence, point)
WHERE r.sequence = event_position.sequence AND r.point = event_position.point;
END IF;
END IF;
-- If we got here, we had better have a timestamp.
-- First attempt: get a position from final_shots / raw_shots. This may
-- be redundant if we arrived here with a sequence / point that had no
-- position, but never mind.
SELECT COALESCE(f.geometry, r.geometry) geometry
INTO position
FROM raw_shots r LEFT JOIN final_shots f USING (sequence, point)
WHERE r.tstamp = event_position.tstamp OR f.tstamp = event_position.tstamp
LIMIT 1; -- Just to be sure
IF position IS NULL THEN
-- Ok, so everything else so far has failed, let's try and get this
-- from real time data. We skip the search via sequence / point and
-- go directly for timestamp.
SELECT geometry
INTO position
FROM geometry_from_tstamp(tstamp, tolerance);
END IF;
RETURN position;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION event_position (timestamptz, integer, integer, numeric) IS
'Return the position associated with a sequence / point in the current project or
with a given timestamp. The timestamp is first searched for in the shot tables
of the current prospect or, if not found, in the real-time data.
Returns a geometry.';
CREATE OR REPLACE FUNCTION event_position (
tstamp timestamptz, sequence integer, point integer
)
RETURNS geometry
AS $$
BEGIN
RETURN event_position(tstamp, sequence, point, 3);
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION event_position (timestamptz, integer, integer) IS
'Overload of event_position with a default tolerance of three seconds.';
CREATE OR REPLACE FUNCTION event_position (
tstamp timestamptz
)
RETURNS geometry
AS $$
BEGIN
RETURN event_position(tstamp, NULL, NULL);
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION event_position (timestamptz) IS
'Overload of event_position (timestamptz, integer, integer) for use when searching by timestamp.';
CREATE OR REPLACE FUNCTION event_position (
sequence integer, point integer
)
RETURNS geometry
AS $$
BEGIN
RETURN event_position(NULL, sequence, point);
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION event_position (integer, integer) IS
'Overload of event_position (timestamptz, integer, integer) for use when searching by sequence / point.';
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
BEGIN
--
-- event_meta(): Fetch event metadata
--
CREATE OR REPLACE FUNCTION event_meta (
tstamp timestamptz, sequence integer, point integer
)
RETURNS jsonb
AS $$
DECLARE
result jsonb;
-- Tolerance is hard-coded, at least until a need to expose it arises.
tolerance numeric;
BEGIN
tolerance := 3; -- seconds
-- We search by timestamp if we can, as that's a lot quicker
IF tstamp IS NOT NULL THEN
SELECT meta
INTO result
FROM real_time_inputs rti
WHERE
rti.tstamp BETWEEN (event_meta.tstamp - tolerance * interval '1 second') AND (event_meta.tstamp + tolerance * interval '1 second')
ORDER BY abs(extract('epoch' FROM rti.tstamp - event_meta.tstamp ))
LIMIT 1;
ELSE
SELECT meta
INTO result
FROM real_time_inputs rti
WHERE
(meta->>'_sequence')::integer = event_meta.sequence AND
(meta->>'_point')::integer = event_meta.point
ORDER BY rti.tstamp DESC
LIMIT 1;
END IF;
RETURN result;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION event_meta (timestamptz, integer, integer) IS
'Return the real-time event metadata associated with a sequence / point in the current project or
with a given timestamp. If a timestamp is given it is searched for in the
real-time data; otherwise the sequence / point pair is used.
Returns a JSONB object.';
CREATE OR REPLACE FUNCTION event_meta (
tstamp timestamptz
)
RETURNS jsonb
AS $$
BEGIN
RETURN event_meta(tstamp, NULL, NULL);
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION event_meta (timestamptz) IS
'Overload of event_meta (timestamptz, integer, integer) for use when searching by timestamp.';
CREATE OR REPLACE FUNCTION event_meta (
sequence integer, point integer
)
RETURNS jsonb
AS $$
BEGIN
RETURN event_meta(NULL, sequence, point);
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION event_meta (integer, integer) IS
'Overload of event_meta (timestamptz, integer, integer) for use when searching by sequence / point.';
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.8"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.8"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,229 @@
-- Add replace_placeholders() and scan_placeholders().
--
-- New schema version: 0.3.9
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This defines a replace_placeholders() function, taking as arguments
-- a text string and either a timestamp or a sequence / point pair. It
-- uses the latter arguments to find metadata from which it can extract
-- relevant information and replace it into the text string wherever the
-- appropriate placeholders appear. For instance, given a call such as
-- replace_placeholders('The position is @POS@', NULL, 11, 2600) it will
-- replace '@POS@' with the position of point 2600 in sequence 11, if it
-- exists (or leave the placeholder untouched otherwise).
--
-- A scan_placeholders() procedure is also defined, which calls the above
-- function on the entire event log.
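--
-- For illustration (hypothetical sequence / point; output depends on data):
--
--   SELECT replace_placeholders('The position is @POS@', NULL, 11, 2600);
--   CALL scan_placeholders();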
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE FUNCTION replace_placeholders (
text_in text, tstamp timestamptz, sequence integer, point integer
)
RETURNS text
AS $$
DECLARE
position geometry;
metadata jsonb;
text_out text;
json_query text;
json_result jsonb;
expect_recursion boolean := false;
BEGIN
text_out := text_in;
-- We only get a position if we are going to need it…
IF regexp_match(text_out, '@DMS@|@POS@|@DEG@') IS NOT NULL THEN
position := ST_Transform(event_position(tstamp, sequence, point), 4326);
END IF;
-- …and likewise with the metadata.
IF regexp_match(text_out, '@BSP@|@WD@|@CMG@|@EN@|@GRID@|@(\$\..*?)@@') IS NOT NULL THEN
metadata := event_meta(tstamp, sequence, point);
END IF;
-- We shortcut the evaluation if neither of the above regexps matched
IF position IS NULL AND metadata IS NULL THEN
RETURN text_out;
END IF;
IF position('@DMS@' IN text_out) != 0 THEN
text_out := replace(text_out, '@DMS@', ST_AsLatLonText(position));
END IF;
IF position('@POS@' IN text_out) != 0 THEN
text_out := replace(text_out, '@POS@', replace(ST_AsLatLonText(position, 'D.DDDDDD'), ' ', ', '));
END IF;
IF position('@DEG@' IN text_out) != 0 THEN
text_out := replace(text_out, '@DEG@', replace(ST_AsLatLonText(position, 'D.DDDDDD'), ' ', ', '));
END IF;
IF position('@EN@' IN text_out) != 0 THEN
IF metadata ? 'easting' AND metadata ? 'northing' THEN
text_out := replace(text_out, '@EN@', (metadata->>'easting') || ', ' || (metadata->>'northing'));
END IF;
END IF;
IF position('@GRID@' IN text_out) != 0 THEN
IF metadata ? 'easting' AND metadata ? 'northing' THEN
text_out := replace(text_out, '@GRID@', (metadata->>'easting') || ', ' || (metadata->>'northing'));
END IF;
END IF;
IF position('@CMG@' IN text_out) != 0 THEN
IF metadata ? 'bearing' THEN
text_out := replace(text_out, '@CMG@', metadata->>'bearing');
END IF;
END IF;
IF position('@BSP@' IN text_out) != 0 THEN
IF metadata ? 'speed' THEN
text_out := replace(text_out, '@BSP@', round((metadata->>'speed')::numeric * 3600 / 1852, 1)::text);
END IF;
END IF;
IF position('@WD@' IN text_out) != 0 THEN
IF metadata ? 'waterDepth' THEN
text_out := replace(text_out, '@WD@', metadata->>'waterDepth');
END IF;
END IF;
json_query := (regexp_match(text_out, '@(\$\..*?)@@'))[1];
IF json_query IS NOT NULL THEN
json_result := jsonb_path_query_array(metadata, json_query::jsonpath);
IF jsonb_array_length(json_result) = 1 THEN
text_out := replace(text_out, '@'||json_query||'@@', json_result->>0);
ELSE
text_out := replace(text_out, '@'||json_query||'@@', json_result::text);
END IF;
-- There might be multiple JSONPath queries, so we may have to recurse
expect_recursion := true;
END IF;
IF expect_recursion IS TRUE AND text_in != text_out THEN
--RAISE NOTICE 'Recursing %', text_out;
-- We don't know if we have found all the JSONPath expressions
-- so we do another pass.
RETURN replace_placeholders(text_out, tstamp, sequence, point);
ELSE
RETURN text_out;
END IF;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION replace_placeholders (text, timestamptz, integer, integer) IS
'Replace certain placeholder strings in the input text with data obtained from shot or real-time data.';
CREATE OR REPLACE PROCEDURE scan_placeholders ()
LANGUAGE sql
AS $$
-- We update non read-only events via the event_log view to leave a trace
-- of the fact that placeholders were replaced (and when).
-- Note that this will not replace placeholders of old edits.
UPDATE event_log
SET remarks = replace_placeholders(remarks, tstamp, sequence, point)
FROM (
SELECT id
FROM event_log e
WHERE
(meta->'readonly')::boolean IS NOT TRUE AND (
regexp_match(remarks, '@DMS@|@POS@|@DEG@') IS NOT NULL OR
regexp_match(remarks, '@BSP@|@WD@|@CMG@|@EN@|@GRID@|@(\$\..*?)@@') IS NOT NULL
)
) t
WHERE event_log.id = t.id;
-- And then we update read-only events directly on the event_log_full table
-- (as of this version of the schema we're prevented from updating read-only
-- events via event_log anyway).
UPDATE event_log_full
SET remarks = replace_placeholders(remarks, tstamp, sequence, point)
FROM (
SELECT uid
FROM event_log_full e
WHERE
(meta->'readonly')::boolean IS TRUE AND (
regexp_match(remarks, '@DMS@|@POS@|@DEG@') IS NOT NULL OR
regexp_match(remarks, '@BSP@|@WD@|@CMG@|@EN@|@GRID@|@(\$\..*?)@@') IS NOT NULL
)
) t
WHERE event_log_full.uid = t.uid;
$$;
COMMENT ON PROCEDURE scan_placeholders () IS
'Run replace_placeholders() on the entire event log.';
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.9"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.9"}' WHERE public.info.key = 'version';
CALL show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,127 @@
-- Add interpolate_geometry_from_tstamp().
--
-- New schema version: 0.3.10
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects only the public schema.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This defines an interpolate_geometry_from_tstamp() function, taking a timestamp
-- and a maximum timespan in seconds. It will then interpolate a position
-- at the exact timestamp based on data from real_time_inputs, provided
-- that the effective interpolation timespan does not exceed the maximum
-- requested.
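--
-- For illustration (hypothetical timestamp):
--
--   SELECT public.interpolate_geometry_from_tstamp('2025-08-21 12:00:00+00', 60);
--   -- Returns an interpolated POINT, or NULL if the straddling fixes are
--   -- more than 60 seconds apart.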
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
BEGIN
CALL pg_temp.show_notice('Defining interpolate_geometry_from_tstamp()');
CREATE OR REPLACE FUNCTION public.interpolate_geometry_from_tstamp(
IN ts timestamptz,
IN maxspan numeric
)
RETURNS geometry
AS $$
DECLARE
ts0 timestamptz;
ts1 timestamptz;
geom0 geometry;
geom1 geometry;
span numeric;
fraction numeric;
BEGIN
SELECT tstamp, geometry
INTO ts0, geom0
FROM real_time_inputs
WHERE tstamp <= ts
ORDER BY tstamp DESC
LIMIT 1;
SELECT tstamp, geometry
INTO ts1, geom1
FROM real_time_inputs
WHERE tstamp >= ts
ORDER BY tstamp ASC
LIMIT 1;
IF geom0 IS NULL OR geom1 IS NULL THEN
RAISE NOTICE 'Interpolation failed (no straddling data)';
RETURN NULL;
END IF;
-- See if we got an exact match
IF ts0 = ts THEN
RETURN geom0;
ELSIF ts1 = ts THEN
RETURN geom1;
END IF;
span := extract('epoch' FROM ts1 - ts0);
IF span > maxspan THEN
RAISE NOTICE 'Interpolation timespan % outside maximum requested (%)', span, maxspan;
RETURN NULL;
END IF;
fraction := extract('epoch' FROM ts - ts0) / span;
IF fraction < 0 OR fraction > 1 THEN
RAISE NOTICE 'Requested timestamp % outside of interpolation span (fraction: %)', ts, fraction;
RETURN NULL;
END IF;
RETURN ST_LineInterpolatePoint(St_MakeLine(geom0, geom1), fraction);
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION public.interpolate_geometry_from_tstamp(timestamptz, numeric) IS
'Interpolate a position over a given maximum timespan (in seconds)
based on real-time inputs. Returns a POINT geometry.';
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.10"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.10"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,149 @@
-- Make augment_event_data() use interpolation.
--
-- New schema version: 0.3.11
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This redefines augment_event_data() to use interpolation rather than
-- nearest neighbour. It now takes an argument indicating the maximum
-- allowed interpolation timespan. An overload with a default of ten
-- minutes is also provided, as an in situ replacement for the previous
-- version.
--
-- The ten minute default is based on Triggerfish headers behaviour seen
-- on crew 248 during soft starts.
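--
-- For illustration:
--
--   CALL augment_event_data();    -- default maximum span of 600 s
--   CALL augment_event_data(60);  -- stricter 60 s maximum span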
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE PROCEDURE augment_event_data (maxspan numeric)
LANGUAGE sql
AS $$
-- Populate the timestamp of sequence / point events
UPDATE event_log_full
SET tstamp = tstamp_from_sequence_shot(sequence, point)
WHERE
tstamp IS NULL AND sequence IS NOT NULL AND point IS NOT NULL;
-- Populate the geometry of sequence / point events for which
-- there is raw_shots data.
UPDATE event_log_full
SET meta = meta ||
jsonb_build_object(
'geometry',
(
SELECT st_transform(geometry, 4326)::jsonb
FROM raw_shots rs
WHERE rs.sequence = event_log_full.sequence AND rs.point = event_log_full.point
)
)
WHERE
sequence IS NOT NULL AND point IS NOT NULL AND
NOT meta ? 'geometry';
-- Populate the geometry of time-based events
UPDATE event_log_full e
SET
meta = meta || jsonb_build_object('geometry',
(SELECT st_transform(g.geometry, 4326)::jsonb
FROM interpolate_geometry_from_tstamp(e.tstamp, maxspan) g))
WHERE
tstamp IS NOT NULL AND
sequence IS NULL AND point IS NULL AND
NOT meta ? 'geometry';
-- Get rid of null geometries
UPDATE event_log_full
SET
meta = meta - 'geometry'
WHERE
jsonb_typeof(meta->'geometry') = 'null';
-- Simplify the GeoJSON when the CRS is EPSG:4326
UPDATE event_log_full
SET
meta = meta #- '{geometry, crs}'
WHERE
meta->'geometry'->'crs'->'properties'->>'name' = 'EPSG:4326';
$$;
COMMENT ON PROCEDURE augment_event_data(numeric)
IS 'Populate missing timestamps and geometries in event_log_full';
CREATE OR REPLACE PROCEDURE augment_event_data ()
LANGUAGE sql
AS $$
CALL augment_event_data(600);
$$;
COMMENT ON PROCEDURE augment_event_data()
IS 'Overload of augment_event_data(maxspan numeric) with a maxspan value of 600 seconds.';
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.11"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.11"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,193 @@
-- Add midnight_shots view and log_midnight_shots() procedure.
--
-- New schema version: 0.3.12
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This defines a midnight_shots view and a log_midnight_shots() procedure
-- (with some overloads). For each sequence straddling midnight UTC, the
-- view returns the shots either side of the boundary (i.e. the last shot
-- of the day and the first shot of the next day).
--
-- The procedure inserts the corresponding events (optionally constrained
-- by an earliest and a latest date) in the event log, unless the events
-- already exist.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE VIEW midnight_shots AS
WITH straddlers AS (
-- Get sequence numbers straddling midnight UTC
SELECT sequence
FROM final_shots
GROUP BY sequence
HAVING min(date(tstamp)) != max(date(tstamp))
),
ts AS (
-- Get earliest and latest timestamps for each day
-- for each of the above sequences.
-- This will return the timestamps for:
-- FSP, LDSP, FDSP, LSP.
SELECT
fs.sequence,
min(fs.tstamp) AS ts0,
max(fs.tstamp) AS ts1
FROM final_shots fs INNER JOIN straddlers USING (sequence)
GROUP BY fs.sequence, (date(fs.tstamp))
ORDER BY fs.sequence, date(fs.tstamp)
),
spts AS (
-- Filter out FSP, LSP from the above.
-- NOTE: This *should* in theory be able to cope with
-- a sequence longer than 24 hours (so with more than
-- one LDSP, FDSP) but that hasn't been tested.
SELECT DISTINCT
sequence,
min(ts1) OVER (PARTITION BY sequence) ldsp,
max(ts0) OVER (PARTITION BY sequence) fdsp
FROM ts
ORDER BY sequence
), evt AS (
SELECT
fs.tstamp,
fs.sequence,
point,
'Last shotpoint of the day' remarks,
'{LDSP}'::text[] labels
FROM final_shots fs
INNER JOIN spts ON fs.sequence = spts.sequence AND fs.tstamp = spts.ldsp
UNION SELECT
fs.tstamp,
fs.sequence,
point,
'First shotpoint of the day' remarks,
'{FDSP}'::text[] labels
FROM final_shots fs
INNER JOIN spts ON fs.sequence = spts.sequence AND fs.tstamp = spts.fdsp
ORDER BY tstamp
)
SELECT * FROM evt;
CREATE OR REPLACE PROCEDURE log_midnight_shots (dt0 date, dt1 date)
LANGUAGE sql
AS $$
INSERT INTO event_log (sequence, point, remarks, labels, meta)
SELECT
sequence, point, remarks, labels,
'{"auto": true, "insertedBy": "log_midnight_shots"}'::jsonb
FROM midnight_shots ms
WHERE
(dt0 IS NULL OR ms.tstamp >= dt0) AND
(dt1 IS NULL OR ms.tstamp <= dt1) AND
NOT EXISTS (
SELECT 1
FROM event_log el
WHERE ms.sequence = el.sequence AND ms.point = el.point AND el.labels @> ms.labels
);
-- Delete any midnight shots that might have been inserted in the log
-- but are no longer relevant according to the final_shots data.
-- We operate on event_log, so the deletion is traceable.
DELETE
FROM event_log
WHERE id IN (
SELECT id
FROM event_log el
LEFT JOIN midnight_shots ms USING (sequence, point)
WHERE
'{LDSP,FDSP}'::text[] && el.labels -- &&: Do the arrays overlap?
AND ms.sequence IS NULL
);
$$;
COMMENT ON PROCEDURE log_midnight_shots (date, date)
IS 'Add midnight shots between two dates dt0 and dt1 to the event_log, unless the events already exist.';
CREATE OR REPLACE PROCEDURE log_midnight_shots (dt0 date)
LANGUAGE sql
AS $$
CALL log_midnight_shots(dt0, NULL);
$$;
COMMENT ON PROCEDURE log_midnight_shots (date)
IS 'Overload taking only a dt0 (adds events on that date or after).';
CREATE OR REPLACE PROCEDURE log_midnight_shots ()
LANGUAGE sql
AS $$
CALL log_midnight_shots(NULL, NULL);
$$;
COMMENT ON PROCEDURE log_midnight_shots ()
IS 'Overload taking no arguments (adds all missing events).';
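-- Example usage (a sketch, not executed by this script; dates are
-- illustrative):
--   CALL log_midnight_shots();                           -- backfill everything
--   CALL log_midnight_shots('2020-01-01');               -- from a date onwards
--   CALL log_midnight_shots('2020-01-01', '2020-01-31'); -- bounded range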
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
BEGIN
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.12"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.12"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,162 @@
-- Fix wrong number of missing shots in summary views
--
-- New schema version: 0.3.13
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- Fixes a bug in the `final_lines_summary` and `raw_lines_summary` views
-- which results in the number of missing shots being miscounted on jobs
-- using three sources.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE VIEW raw_lines_summary AS
WITH summary AS (
SELECT DISTINCT rs.sequence,
first_value(rs.point) OVER w AS fsp,
last_value(rs.point) OVER w AS lsp,
first_value(rs.tstamp) OVER w AS ts0,
last_value(rs.tstamp) OVER w AS ts1,
count(rs.point) OVER w AS num_points,
count(pp.point) OVER w AS num_preplots,
public.st_distance(first_value(rs.geometry) OVER w, last_value(rs.geometry) OVER w) AS length,
((public.st_azimuth(first_value(rs.geometry) OVER w, last_value(rs.geometry) OVER w) * (180)::double precision) / pi()) AS azimuth
FROM (raw_shots rs
LEFT JOIN preplot_points pp USING (line, point))
WINDOW w AS (PARTITION BY rs.sequence ORDER BY rs.tstamp ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
SELECT rl.sequence,
rl.line,
s.fsp,
s.lsp,
s.ts0,
s.ts1,
(s.ts1 - s.ts0) AS duration,
s.num_points,
s.num_preplots,
(SELECT count(*) AS count
FROM missing_sequence_raw_points
WHERE missing_sequence_raw_points.sequence = s.sequence) AS missing_shots,
s.length,
s.azimuth,
rl.remarks,
rl.ntbp,
rl.meta
FROM (summary s
JOIN raw_lines rl USING (sequence));
CREATE OR REPLACE VIEW final_lines_summary AS
WITH summary AS (
SELECT DISTINCT fs.sequence,
first_value(fs.point) OVER w AS fsp,
last_value(fs.point) OVER w AS lsp,
first_value(fs.tstamp) OVER w AS ts0,
last_value(fs.tstamp) OVER w AS ts1,
count(fs.point) OVER w AS num_points,
public.st_distance(first_value(fs.geometry) OVER w, last_value(fs.geometry) OVER w) AS length,
((public.st_azimuth(first_value(fs.geometry) OVER w, last_value(fs.geometry) OVER w) * (180)::double precision) / pi()) AS azimuth
FROM final_shots fs
WINDOW w AS (PARTITION BY fs.sequence ORDER BY fs.tstamp ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
SELECT fl.sequence,
fl.line,
s.fsp,
s.lsp,
s.ts0,
s.ts1,
(s.ts1 - s.ts0) AS duration,
s.num_points,
( SELECT count(*) AS count
FROM missing_sequence_final_points
WHERE missing_sequence_final_points.sequence = s.sequence) AS missing_shots,
s.length,
s.azimuth,
fl.remarks,
fl.meta
FROM (summary s
JOIN final_lines fl USING (sequence));
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.3.13' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.3.12' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
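-- NOTE: These are plain text comparisons. They happen to order correctly
-- for the version strings used so far; the exact match against the
-- previous version is what actually gates the upgrade.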
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.3.13"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.3.13"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,122 @@
-- Add project_configuration() function
--
-- New schema version: 0.4.0
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This adapts the schema to the change in how project configurations are
-- handled (https://gitlab.com/wgp/dougal/software/-/merge_requests/29)
-- by creating a project_configuration() function which returns the
-- current project's configuration data.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE FUNCTION project_configuration()
RETURNS jsonb
LANGUAGE plpgsql
AS $$
DECLARE
schema_name text;
configuration jsonb;
BEGIN
SELECT nspname
INTO schema_name
FROM pg_namespace
WHERE oid = (
SELECT pronamespace
FROM pg_proc
WHERE oid = 'project_configuration'::regproc::oid
);
SELECT meta
INTO configuration
FROM public.projects
WHERE schema = schema_name;
RETURN configuration;
END
$$;
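-- Example usage (a sketch, not executed by this script; the keys shown
-- are those used by later upgrades):
--   SELECT project_configuration()->'planner';
--   SELECT project_configuration()->'binning';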
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.4.0' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.3.12' AND current_db_version != '0.3.13' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.4.0"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.4.0"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,264 @@
-- Use project_configuration() in adjust_planner()
--
-- New schema version: 0.4.1
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This modifies adjust_planner() to use project_configuration()
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE PROCEDURE adjust_planner()
LANGUAGE plpgsql
AS $$
DECLARE
_planner_config jsonb;
_planned_line planned_lines%ROWTYPE;
_lag interval;
_last_sequence sequences_summary%ROWTYPE;
_deltatime interval;
_shotinterval interval;
_tstamp timestamptz;
_incr integer;
BEGIN
SET CONSTRAINTS planned_lines_pkey DEFERRED;
SELECT project_configuration()->'planner'
INTO _planner_config;
SELECT *
INTO _last_sequence
FROM sequences_summary
ORDER BY sequence DESC
LIMIT 1;
SELECT *
INTO _planned_line
FROM planned_lines
WHERE sequence = _last_sequence.sequence AND line = _last_sequence.line;
SELECT
COALESCE(
((lead(ts0) OVER (ORDER BY sequence)) - ts1),
make_interval(mins => (_planner_config->>'defaultLineChangeDuration')::integer)
)
INTO _lag
FROM planned_lines
WHERE sequence = _last_sequence.sequence AND line = _last_sequence.line;
_incr = sign(_last_sequence.lsp - _last_sequence.fsp);
RAISE NOTICE '_planner_config: %', _planner_config;
RAISE NOTICE '_last_sequence: %', _last_sequence;
RAISE NOTICE '_planned_line: %', _planned_line;
RAISE NOTICE '_incr: %', _incr;
-- Does the latest sequence match a planned sequence?
IF _planned_line IS NULL THEN -- No it doesn't
RAISE NOTICE 'Latest sequence shot does not match a planned sequence';
SELECT * INTO _planned_line FROM planned_lines ORDER BY sequence ASC LIMIT 1;
RAISE NOTICE '_planned_line: %', _planned_line;
IF _planned_line.sequence <= _last_sequence.sequence THEN
RAISE NOTICE 'Renumbering the planned sequences starting from %', _planned_line.sequence + 1;
-- Renumber the planned sequences starting from last shot sequence number + 1
UPDATE planned_lines
SET sequence = sequence + _last_sequence.sequence - _planned_line.sequence + 1;
END IF;
-- The correction to make to the first planned line's ts0 will be based on either the last
-- sequence's EOL + default line change time or the current time, whichever is later.
_deltatime := GREATEST(COALESCE(_last_sequence.ts1_final, _last_sequence.ts1) + make_interval(mins => (_planner_config->>'defaultLineChangeDuration')::integer), current_timestamp) - _planned_line.ts0;
-- Is the first planned line's start time in the past? (±5 mins)
IF _planned_line.ts0 < (current_timestamp - make_interval(mins => 5)) THEN
RAISE NOTICE 'First planned line is in the past. Adjusting times by %', _deltatime;
-- Adjust the start / end time of the planned lines by assuming that we are at
-- `defaultLineChangeDuration` minutes away from SOL of the first planned line.
UPDATE planned_lines
SET
ts0 = ts0 + _deltatime,
ts1 = ts1 + _deltatime;
END IF;
ELSE -- Yes it does
RAISE NOTICE 'Latest sequence does match a planned sequence: %, %', _planned_line.sequence, _planned_line.line;
-- Is it online?
IF EXISTS(SELECT 1 FROM raw_lines_files WHERE sequence = _last_sequence.sequence AND hash = '*online*') THEN
-- Yes it is
RAISE NOTICE 'Sequence % is online', _last_sequence.sequence;
-- Let us get the SOL from the events log if we can
RAISE NOTICE 'Trying to set fsp, ts0 from events log FSP, FGSP';
WITH e AS (
SELECT * FROM event_log
WHERE
sequence = _last_sequence.sequence
AND ('FSP' = ANY(labels) OR 'FGSP' = ANY(labels))
ORDER BY tstamp LIMIT 1
)
UPDATE planned_lines
SET
fsp = COALESCE(e.point, fsp),
ts0 = COALESCE(e.tstamp, ts0)
FROM e
WHERE planned_lines.sequence = _last_sequence.sequence;
-- Shot interval
_shotinterval := (_last_sequence.ts1 - _last_sequence.ts0) / abs(_last_sequence.lsp - _last_sequence.fsp);
RAISE NOTICE 'Estimating EOL from current shot interval: %', _shotinterval;
SELECT (abs(lsp-fsp) * _shotinterval + ts0) - ts1
INTO _deltatime
FROM planned_lines
WHERE sequence = _last_sequence.sequence;
---- Set ts1 for the current sequence
--UPDATE planned_lines
--SET
--ts1 = (abs(lsp-fsp) * _shotinterval) + ts0
--WHERE sequence = _last_sequence.sequence;
RAISE NOTICE 'Adjustment is %', _deltatime;
IF abs(EXTRACT(EPOCH FROM _deltatime)) < 8 THEN
RAISE NOTICE 'Adjustment too small (< 8 s), so not applying it';
RETURN;
END IF;
-- Adjust ts1 for the current sequence
UPDATE planned_lines
SET ts1 = ts1 + _deltatime
WHERE sequence = _last_sequence.sequence;
-- Now shift all sequences after
UPDATE planned_lines
SET ts0 = ts0 + _deltatime, ts1 = ts1 + _deltatime
WHERE sequence > _last_sequence.sequence;
RAISE NOTICE 'Deleting planned sequences before %', _planned_line.sequence;
-- Remove all previous planner entries.
DELETE
FROM planned_lines
WHERE sequence < _last_sequence.sequence;
ELSE
-- No it isn't
RAISE NOTICE 'Sequence % is offline', _last_sequence.sequence;
-- We were supposed to finish at _planned_line.ts1 but we finished at:
_tstamp := GREATEST(COALESCE(_last_sequence.ts1_final, _last_sequence.ts1), current_timestamp);
-- WARNING Next line is for testing only
--_tstamp := COALESCE(_last_sequence.ts1_final, _last_sequence.ts1);
-- So we need to adjust timestamps by:
_deltatime := _tstamp - _planned_line.ts1;
RAISE NOTICE 'Planned end: %, actual end: % (%, %)', _planned_line.ts1, _tstamp, _planned_line.sequence, _last_sequence.sequence;
RAISE NOTICE 'Shifting times by % for sequences > %', _deltatime, _planned_line.sequence;
-- NOTE: This won't work if sequences are not, err… sequential.
-- NOTE: This has been known to happen in 2020.
UPDATE planned_lines
SET
ts0 = ts0 + _deltatime,
ts1 = ts1 + _deltatime
WHERE sequence > _planned_line.sequence;
RAISE NOTICE 'Deleting planned sequences up to %', _planned_line.sequence;
-- Remove all previous planner entries.
DELETE
FROM planned_lines
WHERE sequence <= _last_sequence.sequence;
END IF;
END IF;
END;
$$;
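-- Example usage (a sketch, not executed by this script; scheduling is
-- assumed to be handled by the application layer):
--   CALL adjust_planner();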
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.4.1' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.4.0' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.4.1"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.4.1"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,98 @@
-- Use project_configuration() in binning_parameters()
--
-- New schema version: 0.4.2
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This modifies binning_parameters() to use project_configuration()
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE FUNCTION binning_parameters() RETURNS jsonb
LANGUAGE sql STABLE LEAKPROOF PARALLEL SAFE
AS $$
SELECT project_configuration()->'binning' binning;
$$;
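-- Example usage (a sketch, not executed by this script; the layout of
-- the 'binning' object is not defined here):
--   SELECT binning_parameters();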
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.4.2' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.4.1' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.4.2"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.4.2"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,164 @@
-- Support notification payloads larger than Postgres' NOTIFY limit.
--
-- New schema version: 0.4.3
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects the public schema only.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This creates a new table where large notification payloads are stored
-- temporarily and from which they might be recalled by the notification
-- listeners. It also creates a purge_notifications() procedure used to
-- clean up old notifications from the notifications log and finally,
-- modifies notify() to support these changes. When a large payload is
-- encountered, the payload is stored in the notify_payloads table and
-- a trimmed down version containing a notification_id is sent to listeners
-- instead. Listeners can then query notify_payloads to retrieve the full
-- payloads. It is the application layer's responsibility to delete old
-- notifications.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_schema () AS $outer$
BEGIN
RAISE NOTICE 'Updating public schema';
-- This upgrade touches the public schema only, so a plain
-- search path suffices.
SET search_path TO public;
CREATE TABLE IF NOT EXISTS public.notify_payloads (
id SERIAL,
tstamp timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
payload text NOT NULL DEFAULT '',
PRIMARY KEY (id)
);
CREATE INDEX IF NOT EXISTS notify_payload_tstamp ON notify_payloads (tstamp);
CREATE OR REPLACE FUNCTION public.notify() RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
channel text := TG_ARGV[0];
pid text;
payload text;
notification text;
payload_id integer;
BEGIN
SELECT projects.pid INTO pid FROM projects WHERE schema = TG_TABLE_SCHEMA;
payload := json_build_object(
'tstamp', CURRENT_TIMESTAMP,
'operation', TG_OP,
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'old', row_to_json(OLD),
'new', row_to_json(NEW),
'pid', pid
)::text;
IF octet_length(payload) < 1000 THEN
PERFORM pg_notify(channel, payload);
ELSE
-- The payload is too large to send directly: store it in the
-- notify_payloads table and send a trimmed notification carrying its
-- autogenerated payload_id instead. Listeners can then fetch the full
-- payload by id if interested. It is up to the application layer to
-- expire older payloads (see purge_notifications) to conserve space.
INSERT INTO notify_payloads (payload) VALUES (payload) RETURNING id INTO payload_id;
notification := json_build_object(
'tstamp', CURRENT_TIMESTAMP,
'operation', TG_OP,
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'pid', pid,
'payload_id', payload_id
)::text;
PERFORM pg_notify(channel, notification);
RAISE INFO 'Payload over limit';
END IF;
RETURN NULL;
END;
$$;
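-- Listener sketch (illustrative, not executed by this script; the
-- channel name and id are hypothetical, and this logic belongs in the
-- application layer):
--   LISTEN some_channel;
--   -- on receiving a notification that carries a "payload_id":
--   SELECT payload FROM public.notify_payloads WHERE id = 42;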
CREATE PROCEDURE public.purge_notifications (age_seconds numeric DEFAULT 120) AS $$
DELETE FROM notify_payloads WHERE EXTRACT(epoch FROM CURRENT_TIMESTAMP - tstamp) > age_seconds;
$$ LANGUAGE sql;
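-- Example usage (a sketch, not executed by this script; meant to be run
-- periodically by the application layer):
--   CALL purge_notifications();      -- default 120 s retention
--   CALL purge_notifications(3600);  -- hypothetical one-hour retention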
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.4.3' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.4.2' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
-- This upgrade modified the `public` schema only, not individual
-- project schemas.
CALL pg_temp.upgrade_schema();
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_schema ();
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.4.3"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.4.3"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,104 @@
-- Add event_log_changes function
--
-- New schema version: 0.4.4
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This adds a function event_log_changes which returns the subset of
-- events from event_log_full which have been modified after a given
-- timestamp.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE FUNCTION event_log_changes(ts0 timestamptz)
RETURNS SETOF event_log_full
LANGUAGE sql
AS $$
SELECT *
FROM event_log_full
WHERE lower(validity) > ts0 OR (upper(validity) IS NOT NULL AND upper(validity) > ts0)
ORDER BY lower(validity);
$$;
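-- Example usage (a sketch, not executed by this script; the timestamp
-- is illustrative):
--   SELECT * FROM event_log_changes('2020-01-01T00:00:00Z');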
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.4.4' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.4.3' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.4.4"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.4.4"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,147 @@
-- Turn project_summary into a materialised view
--
-- New schema version: 0.4.5
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- The project_summary view is quite a bottleneck. While it itself is
-- not the real culprit (rather the underlying views are), this is one
-- relatively cheap way of improving responsiveness from the client's
-- point of view.
-- We leave the details of how / when to refresh the view to the non-
-- database code.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
DROP VIEW project_summary;
CREATE MATERIALIZED VIEW project_summary AS
WITH fls AS (
SELECT
avg((final_lines_summary.duration / ((final_lines_summary.num_points - 1))::double precision)) AS shooting_rate,
avg((final_lines_summary.length / date_part('epoch'::text, final_lines_summary.duration))) AS speed,
sum(final_lines_summary.duration) AS prod_duration,
sum(final_lines_summary.length) AS prod_distance
FROM final_lines_summary
), project AS (
SELECT
p.pid,
p.name,
p.schema
FROM public.projects p
WHERE (split_part(current_setting('search_path'::text), ','::text, 1) = p.schema)
)
SELECT
project.pid,
project.name,
project.schema,
( SELECT count(*) AS count
FROM preplot_lines
WHERE (preplot_lines.class = 'V'::bpchar)) AS lines,
ps.total,
ps.virgin,
ps.prime,
ps.other,
ps.ntba,
ps.remaining,
( SELECT to_json(fs.*) AS to_json
FROM final_shots fs
ORDER BY fs.tstamp
LIMIT 1) AS fsp,
( SELECT to_json(fs.*) AS to_json
FROM final_shots fs
ORDER BY fs.tstamp DESC
LIMIT 1) AS lsp,
( SELECT count(*) AS count
FROM raw_lines rl) AS seq_raw,
( SELECT count(*) AS count
FROM final_lines rl) AS seq_final,
fls.prod_duration,
fls.prod_distance,
fls.shooting_rate,
fls.speed
FROM preplot_summary ps,
fls,
project;
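-- The materialised view must now be refreshed out of band; how and when
-- is left to the non-database code, e.g.:
--   REFRESH MATERIALIZED VIEW project_summary;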
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.4.5' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.4.4' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.4.5"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.4.5"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,164 @@
-- Sailline ancillary data
--
-- New schema version: 0.5.0
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- Issue #264 calls for associating sail and acquisition lines, indicating
-- the expected acquisition direction, and recording other data which
-- cannot be provided via standard import formats such as SPS or P1/90.
--
-- We support this via an additional table that holds most of the required
-- data. Much of it can be inferred from regular preplots, e.g., line
-- direction can be deduced from preplot point order, and sail / source
-- line offsets can be taken from P1/90 headers or from a configuration
-- parameter. Alternatively, and preferably, the data can be provided
-- explicitly, which is what issue #264 asks for.
--
-- In principle, this makes at least some of the attributes of `preplot_lines`
-- redundant (at least `incr` and `ntba`) but we will leave them there for
-- the time being as technical debt.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE TABLE IF NOT EXISTS preplot_saillines
(
sailline integer NOT NULL,
line integer NOT NULL,
sailline_class character(1) NOT NULL,
line_class character(1) NOT NULL,
incr boolean NOT NULL DEFAULT true,
ntba boolean NOT NULL DEFAULT false,
remarks text NOT NULL DEFAULT '',
meta jsonb NOT NULL DEFAULT '{}'::jsonb,
hash text NULL, -- Theoretically the info in this table could all be inferred.
PRIMARY KEY (sailline, sailline_class, line, line_class, incr),
CONSTRAINT fk_sailline FOREIGN KEY (sailline, sailline_class)
REFERENCES preplot_lines (line, class)
ON UPDATE CASCADE
ON DELETE CASCADE,
CONSTRAINT fk_line FOREIGN KEY (line, line_class)
REFERENCES preplot_lines (line, class)
ON UPDATE CASCADE
ON DELETE CASCADE,
CONSTRAINT fk_hash FOREIGN KEY (hash)
REFERENCES files (hash) MATCH SIMPLE
ON UPDATE CASCADE
ON DELETE CASCADE,
CHECK (sailline_class = 'V' AND sailline_class != line_class)
);
COMMENT ON TABLE preplot_saillines
IS 'We explicitly associate each preplot sailline (aka vessel line) with zero or more source lines. This information can be inferred from preplot files, e.g., via a sailline offset value, or explicitly provided.';
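-- Example of an explicit association (a sketch, not executed by this
-- script; the line numbers and source line class 'S' are hypothetical):
--   INSERT INTO preplot_saillines (sailline, line, sailline_class, line_class, incr)
--   VALUES (1001, 2001, 'V', 'S', true)
--   ON CONFLICT DO NOTHING;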
-- Let us copy whatever information we can from existing tables or views
INSERT INTO preplot_saillines
(sailline, line, sailline_class, line_class, incr, ntba, remarks, meta)
SELECT DISTINCT
sailline, psp.line, 'V' sailline_class, psp.class line_class, pl.incr, pl.ntba, pl.remarks, pl.meta
FROM preplot_saillines_points psp
INNER JOIN preplot_lines pl ON psp.sailline = pl.line AND pl.class = 'V'
ORDER BY sailline
ON CONFLICT DO NOTHING;
-- We need to recreate the preplot_saillines_points view
CREATE OR REPLACE VIEW preplot_saillines_points AS
SELECT psl.sailline,
psl.ntba AS sailline_ntba,
psl.line,
pps.point,
pps.class,
pps.ntba,
pps.geometry,
pps.meta
FROM preplot_saillines psl
INNER JOIN preplot_points pps
ON psl.line = pps.line AND psl.line_class = pps.class;
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.5.0' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.4.5' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.5.0"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.5.0"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,119 @@
-- Fix sailline association in sequences_detail
--
-- New schema version: 0.5.1
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- The sequences_detail view wrongly associates source lines and shot
-- points when it should be associating saillines and shot points instead.
--
-- This update fixes that issue (#307).
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE VIEW sequences_detail
AS
SELECT rl.sequence,
rl.line AS sailline,
rs.line,
rs.point,
rs.tstamp,
rs.objref AS objrefraw,
fs.objref AS objreffinal,
st_transform(pp.geometry, 4326) AS geometrypreplot,
st_transform(rs.geometry, 4326) AS geometryraw,
st_transform(fs.geometry, 4326) AS geometryfinal,
ij_error(rs.line::double precision, rs.point::double precision, rs.geometry) AS errorraw,
ij_error(rs.line::double precision, rs.point::double precision, fs.geometry) AS errorfinal,
json_build_object('preplot', pp.meta, 'raw', rs.meta, 'final', fs.meta) AS meta
FROM raw_lines rl
INNER JOIN preplot_saillines psl ON rl.line = psl.sailline
INNER JOIN raw_shots rs ON rs.sequence = rl.sequence AND rs.line = psl.line
INNER JOIN preplot_points pp ON psl.line = pp.line AND psl.line_class = pp.class AND rs.point = pp.point
LEFT JOIN final_shots fs ON rl.sequence = fs.sequence AND rs.point = fs.point;
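-- Illustrative query (not executed by this script; the sequence number
-- is hypothetical):
--   SELECT point, errorraw, errorfinal
--   FROM sequences_detail
--   WHERE sequence = 42;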
ALTER TABLE sequences_detail
OWNER TO postgres;
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.5.1' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.5.0' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.5.1"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.5.1"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,145 @@
-- Fix preplot_lines_summary view
--
-- New schema version: 0.5.2
--
-- WARNING: This update is buggy and does not give the desired
-- results. Schema version 0.5.4 fixes this.
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- Following introduction of `preplot_saillines` (0.5.0), the incr and
-- ntba statuses are stored in a separate table, not in `preplot_lines`
-- (TODO: a future upgrade should remove those columns from `preplot_lines`)
--
-- Now any views referencing `incr` and `ntba` must be updated to point to
-- the new location of those attributes.
--
-- This update fixes #312.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE VIEW preplot_lines_summary
AS
WITH summary AS (
SELECT DISTINCT pp.line, pp.class,
first_value(pp.point) OVER w AS p0,
last_value(pp.point) OVER w AS p1,
count(pp.point) OVER w AS num_points,
st_distance(first_value(pp.geometry) OVER w, last_value(pp.geometry) OVER w) AS length,
st_azimuth(first_value(pp.geometry) OVER w, last_value(pp.geometry) OVER w) * 180::double precision / pi() AS azimuth0,
st_azimuth(last_value(pp.geometry) OVER w, first_value(pp.geometry) OVER w) * 180::double precision / pi() AS azimuth1
FROM preplot_points pp
WHERE pp.class = 'V'::bpchar
WINDOW w AS (PARTITION BY pp.line ORDER BY pp.point ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
SELECT psl.line,
CASE
WHEN psl.incr THEN s.p0
ELSE s.p1
END AS fsp,
CASE
WHEN psl.incr THEN s.p1
ELSE s.p0
END AS lsp,
s.num_points,
s.length,
CASE
WHEN psl.incr THEN s.azimuth0
ELSE s.azimuth1
END AS azimuth,
psl.incr,
psl.remarks
FROM summary s
JOIN preplot_saillines psl ON psl.sailline_class = s.class AND s.line = psl.line
ORDER BY psl.line, incr;
ALTER TABLE preplot_lines_summary
OWNER TO postgres;
COMMENT ON VIEW preplot_lines_summary
IS 'Summarises ''V'' (vessel sailline) preplot lines.';
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.5.2' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.5.1' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.5.2"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.5.2"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,132 @@
-- Fix final_lines_summary view
--
-- New schema version: 0.5.3
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This fixes a long-standing bug, where if the sail and source lines are
-- the same, the number of missing shots will be miscounted.
--
-- This update fixes #313.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE VIEW final_lines_summary
AS
WITH summary AS (
SELECT DISTINCT fs.sequence,
first_value(fs.point) OVER w AS fsp,
last_value(fs.point) OVER w AS lsp,
first_value(fs.tstamp) OVER w AS ts0,
last_value(fs.tstamp) OVER w AS ts1,
count(fs.point) OVER w AS num_points,
count(pp.point) OVER w AS num_preplots,
st_distance(first_value(fs.geometry) OVER w, last_value(fs.geometry) OVER w) AS length,
st_azimuth(first_value(fs.geometry) OVER w, last_value(fs.geometry) OVER w) * 180::double precision / pi() AS azimuth
FROM final_shots fs
LEFT JOIN preplot_points pp USING (line, point)
WINDOW w AS (PARTITION BY fs.sequence ORDER BY fs.tstamp ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
SELECT fl.sequence,
fl.line,
s.fsp,
s.lsp,
s.ts0,
s.ts1,
s.ts1 - s.ts0 AS duration,
s.num_points,
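-- missing_shots: preplot points within the acquired fsp..lsp range,
-- minus those matched by a final shot in this sequence.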
(( SELECT count(*) AS count
FROM preplot_points
WHERE preplot_points.line = fl.line AND (preplot_points.point >= s.fsp AND preplot_points.point <= s.lsp OR preplot_points.point >= s.lsp AND preplot_points.point <= s.fsp))) - s.num_preplots AS missing_shots,
s.length,
s.azimuth,
fl.remarks,
fl.meta
FROM summary s
JOIN final_lines fl USING (sequence);
ALTER TABLE final_lines_summary
OWNER TO postgres;
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.5.3' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.5.2' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.5.3"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.5.3"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,145 @@
-- Fix preplot_lines_summary view
--
-- New schema version: 0.5.4
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects all schemas in the database.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- Fixes upgrade 35 (0.5.2). The original description of 0.5.2 is included
-- below for ease of reference:
--
-- Following introduction of `preplot_saillines` (0.5.0), the incr and
-- ntba statuses are stored in a separate table, not in `preplot_lines`
-- (TODO: a future upgrade should remove those columns from `preplot_lines`)
--
-- Now any views referencing `incr` and `ntba` must be updated to point to
-- the new location of those attributes.
--
-- This update fixes #312.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_survey_schema (schema_name text) AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', schema_name;
-- We need to set the search path because some of the trigger
-- functions reference other tables in survey schemas assuming
-- they are in the search path.
EXECUTE format('SET search_path TO %I,public', schema_name);
CREATE OR REPLACE VIEW preplot_lines_summary
AS
WITH summary AS (
SELECT DISTINCT pp.line,
pp.class,
first_value(pp.point) OVER w AS p0,
last_value(pp.point) OVER w AS p1,
count(pp.point) OVER w AS num_points,
st_distance(first_value(pp.geometry) OVER w, last_value(pp.geometry) OVER w) AS length,
st_azimuth(first_value(pp.geometry) OVER w, last_value(pp.geometry) OVER w) * 180::double precision / pi() AS azimuth0,
st_azimuth(last_value(pp.geometry) OVER w, first_value(pp.geometry) OVER w) * 180::double precision / pi() AS azimuth1
FROM preplot_points pp
WHERE pp.class = 'V'::bpchar
WINDOW w AS (PARTITION BY pp.line ORDER BY pp.point ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
SELECT DISTINCT psl.sailline AS line,
CASE
WHEN psl.incr THEN s.p0
ELSE s.p1
END AS fsp,
CASE
WHEN psl.incr THEN s.p1
ELSE s.p0
END AS lsp,
s.num_points,
s.length,
CASE
WHEN psl.incr THEN s.azimuth0
ELSE s.azimuth1
END AS azimuth,
psl.incr,
psl.remarks
FROM summary s
JOIN preplot_saillines psl ON psl.sailline_class = s.class AND s.line = psl.sailline
ORDER BY psl.sailline, psl.incr;
ALTER TABLE preplot_lines_summary
OWNER TO postgres;
COMMENT ON VIEW preplot_lines_summary
IS 'Summarises ''V'' (vessel sailline) preplot lines.';
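-- Illustrative query (not executed by this script):
--   SELECT line, fsp, lsp, azimuth, incr
--   FROM preplot_lines_summary
--   ORDER BY line;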
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.5.4' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.5.3' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
FOR row IN
SELECT schema_name FROM information_schema.schemata
WHERE schema_name LIKE 'survey_%'
ORDER BY schema_name
LOOP
CALL pg_temp.upgrade_survey_schema(row.schema_name);
END LOOP;
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_survey_schema (schema_name text);
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.5.4"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.5.4"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--


@@ -0,0 +1,110 @@
-- Add keystore table to the public schema
--
-- New schema version: 0.6.0
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade only affects the `public` schema.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This update adds a `keystore` table, intended for storing arbitrary
-- key / value pairs. Unlike the `info` tables, it is not meant to be
-- directly accessible via the API. Its main purpose as of this writing
-- is to store user definitions (see #176, #177, #180).
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', 'public';
SET search_path TO public;
CREATE TABLE IF NOT EXISTS keystore (
type TEXT NOT NULL, -- A class of data to be stored
key TEXT NOT NULL, -- A key that is unique for the class and access type
last_modified TIMESTAMP -- To detect update conflicts
DEFAULT CURRENT_TIMESTAMP,
data jsonb,
PRIMARY KEY (type, key) -- Composite primary key
);
-- Create a function to update the last_modified timestamp
CREATE OR REPLACE FUNCTION update_last_modified()
RETURNS TRIGGER AS $$
BEGIN
NEW.last_modified = CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Create a trigger that calls the function before each update
CREATE OR REPLACE TRIGGER update_keystore_last_modified
BEFORE UPDATE ON keystore
FOR EACH ROW
EXECUTE FUNCTION update_last_modified();
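-- Example usage (a sketch, not executed by this script; type, key and
-- data are illustrative):
--   INSERT INTO keystore (type, key, data)
--   VALUES ('user', 'some-key', '{"name": "example"}'::jsonb)
--   ON CONFLICT (type, key) DO UPDATE SET data = EXCLUDED.data;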
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.6.0' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.5.4' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
CALL pg_temp.upgrade_database();
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_database ();
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.6.0"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.6.0"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--

View File

@@ -0,0 +1,108 @@
-- Add default bootstrap user
--
-- New schema version: 0.6.1
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade only affects the `public` schema.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This update adds a default user to the system (see #176, #177, #180).
-- The default user can only be used when connecting from localhost.
--
-- This user has full access to every project via the organisations
-- permissions wildcard: `{"*": {"read": true, "write": true, "edit": true}}`
-- and can be used to bootstrap the system by creating other users
-- and assigning organisational permissions.
--
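-- Once applied, the bootstrap user record can be inspected with
-- (illustrative):
--
--   SELECT data FROM public.keystore WHERE type = 'user';
--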
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', 'public';
SET search_path TO public;
INSERT INTO keystore (type, key, data)
VALUES ('user', '6f1e7159-4ca0-4ae4-ab4e-89078166cc10', '
{
"id": "6f1e7159-4ca0-4ae4-ab4e-89078166cc10",
"ip": "127.0.0.0/24",
"name": "☠️",
"colour": "red",
"active": true,
"organisations": {
"*": {
"read": true,
"write": true,
"edit": true
}
}
}
'::jsonb)
ON CONFLICT (type, key) DO NOTHING;
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.6.1' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.6.0' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
CALL pg_temp.upgrade_database();
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_database ();
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.6.1"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.6.1"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--

View File

@@ -0,0 +1,106 @@
-- Add organisations permissions to project configurations
--
-- New schema version: 0.6.2
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade only affects the `public` schema.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This update adds an "organisations" section to the configuration,
-- with a default configured organisation of "WGP" with full access.
-- This is so that projects can be made accessible after migrating
-- to the new permissions architecture.
--
-- In addition, projects with an id starting with "eq" are assumed to
-- be Equinor projects, and an additional organisation is added with
-- read-only access. This is intended for clients, which should be
-- assigned to the "Equinor" organisation.
--
-- Finally, we assign the vessel to the "WGP" organisation (full access)
-- so that we can actually use administrative endpoints.
--
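-- After this update, a project's meta is expected to contain something
-- like the following (illustrative values only):
--
--   {
--     "organisations": {
--       "WGP": {"read": true, "write": true, "edit": true},
--       "Equinor": {"read": true, "write": false, "edit": false}
--     }
--   }
--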
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', 'public';
SET search_path TO public;
-- Add "organisations" section to configurations, if not already present
UPDATE projects
SET
meta = jsonb_set(meta, '{organisations}', '{"WGP": {"read": true, "write": true, "edit": true}}'::jsonb, true)
WHERE meta->'organisations' IS NULL;
-- Add (or overwrite!) "organisations.Equinor" giving read-only access (can be changed later via API)
UPDATE projects
SET
meta = jsonb_set(meta, '{organisations, Equinor}', '{"read": true, "write": false, "edit": false}'::jsonb, true)
WHERE pid LIKE 'eq%';
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.6.2' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.6.1' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
CALL pg_temp.upgrade_database();
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_database ();
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.6.2"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.6.2"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--

View File

@@ -0,0 +1,109 @@
-- Add comparisons schema and table
--
-- New schema version: 0.6.3
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade creates a new schema called `comparisons`.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This update adds a `comparisons` table to a `comparisons` schema.
-- The `comparisons.comparisons` table holds 4D prospect comparison data.
--
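-- Rows are keyed by (baseline_pid, monitor_pid, type), so a given
-- comparison can be fetched with, e.g. (illustrative project IDs):
--
--   SELECT meta FROM comparisons.comparisons
--   WHERE baseline_pid = 'eq22001' AND monitor_pid = 'eq23001';
--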
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', 'public';
SET search_path TO public;
-- BEGIN
CREATE SCHEMA IF NOT EXISTS comparisons
AUTHORIZATION postgres;
COMMENT ON SCHEMA comparisons
IS 'Holds 4D comparison data and logic';
CREATE TABLE IF NOT EXISTS comparisons.comparisons
(
type text COLLATE pg_catalog."default" NOT NULL,
baseline_pid text COLLATE pg_catalog."default" NOT NULL,
monitor_pid text COLLATE pg_catalog."default" NOT NULL,
data bytea,
meta jsonb NOT NULL DEFAULT '{}'::jsonb,
CONSTRAINT comparisons_pkey PRIMARY KEY (baseline_pid, monitor_pid, type)
)
TABLESPACE pg_default;
ALTER TABLE IF EXISTS comparisons.comparisons
OWNER to postgres;
-- END
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.6.3' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.6.2' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
CALL pg_temp.upgrade_database();
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_database ();
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.6.3"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.6.3"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--

View File

@@ -0,0 +1,169 @@
-- Allow notify() to exclude columns from the notification payload
--
-- New schema version: 0.6.4
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects the public schema only.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This update modifies notify() to accept, as optional arguments, the
-- names of columns that are to be *excluded* from the notification.
-- It is intended for tables with large columns which are however of
-- no particular interest in a notification.
--
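-- For example (illustrative table and channel names), a trigger that
-- excludes a large `data` column could be declared as:
--
--   CREATE TRIGGER my_table_tg
--     AFTER INSERT OR DELETE OR UPDATE ON my_schema.my_table
--     FOR EACH ROW EXECUTE FUNCTION public.notify('my_channel', 'data');
--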
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', 'public';
SET search_path TO public;
-- BEGIN
CREATE OR REPLACE FUNCTION public.notify()
RETURNS trigger
LANGUAGE 'plpgsql'
COST 100
VOLATILE NOT LEAKPROOF
AS $BODY$
DECLARE
channel text := TG_ARGV[0];
pid text;
payload text;
notification text;
payload_id integer;
old_json jsonb;
new_json jsonb;
excluded_col text;
i integer;
BEGIN
-- Fetch pid
SELECT projects.pid INTO pid FROM projects WHERE schema = TG_TABLE_SCHEMA;
-- Build old and new as jsonb, excluding specified columns if provided
IF OLD IS NOT NULL THEN
old_json := row_to_json(OLD)::jsonb;
FOR i IN 1 .. TG_NARGS - 1 LOOP
excluded_col := TG_ARGV[i];
old_json := old_json - excluded_col;
END LOOP;
ELSE
old_json := NULL;
END IF;
IF NEW IS NOT NULL THEN
new_json := row_to_json(NEW)::jsonb;
FOR i IN 1 .. TG_NARGS - 1 LOOP
excluded_col := TG_ARGV[i];
new_json := new_json - excluded_col;
END LOOP;
ELSE
new_json := NULL;
END IF;
-- Build payload
payload := json_build_object(
'tstamp', CURRENT_TIMESTAMP,
'operation', TG_OP,
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'old', old_json,
'new', new_json,
'pid', pid
)::text;
-- Handle large payloads
IF octet_length(payload) < 1000 THEN
PERFORM pg_notify(channel, payload);
ELSE
-- Store large payload and notify with ID (as before)
INSERT INTO notify_payloads (payload) VALUES (payload) RETURNING id INTO payload_id;
notification := json_build_object(
'tstamp', CURRENT_TIMESTAMP,
'operation', TG_OP,
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'pid', pid,
'payload_id', payload_id
)::text;
PERFORM pg_notify(channel, notification);
RAISE INFO 'Payload over limit';
END IF;
RETURN NULL;
END;
$BODY$;
ALTER FUNCTION public.notify()
OWNER TO postgres;
-- END
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.6.4' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.6.3' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
CALL pg_temp.upgrade_database();
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_database ();
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.6.4"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.6.4"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--

View File

@@ -0,0 +1,96 @@
-- Notify on changes to comparisons
--
-- New schema version: 0.6.5
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade only affects the `comparisons` schema.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This update adds a trigger to the `comparisons.comparisons` table
-- so that changes are broadcast on the `comparisons` channel via
-- notify(). The large `data` column is excluded from the
-- notification payload.
--
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', 'public';
SET search_path TO public;
-- BEGIN
CREATE OR REPLACE TRIGGER comparisons_tg
AFTER INSERT OR DELETE OR UPDATE
ON comparisons.comparisons
FOR EACH ROW
EXECUTE FUNCTION public.notify('comparisons', 'data');
-- END
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.6.5' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.6.4' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
CALL pg_temp.upgrade_database();
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_database ();
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.6.5"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.6.5"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--

View File

@@ -0,0 +1,157 @@
-- Add last_project_update() function
--
-- New schema version: 0.6.6
--
-- ATTENTION:
--
-- ENSURE YOU HAVE BACKED UP THE DATABASE BEFORE RUNNING THIS SCRIPT.
--
--
-- NOTE: This upgrade affects the public schema only.
-- NOTE: Each application starts a transaction, which must be committed
-- or rolled back.
--
-- This adds a last_project_update(pid) function. It takes a project ID
-- and returns the last known timestamp from that project. Timestamps
-- are derived from multiple sources:
--
-- - raw_shots table
-- - final_shots table
-- - event_log_full table
-- - info table where key = 'qc'
-- - files table, from the hashes (which contain the file's mtime)
-- - project configuration, looking for an _updatedOn property
--
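-- Once applied, it can be called directly, e.g. (illustrative
-- project ID):
--
--   SELECT public.last_project_update('eq23001');
--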
-- To apply, run as the dougal user:
--
-- psql <<EOF
-- \i $THIS_FILE
-- COMMIT;
-- EOF
--
-- NOTE: It can be applied multiple times without ill effect.
--
BEGIN;
CREATE OR REPLACE PROCEDURE pg_temp.show_notice (notice text) AS $$
BEGIN
RAISE NOTICE '%', notice;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade_database () AS $outer$
BEGIN
RAISE NOTICE 'Updating schema %', 'public';
SET search_path TO public;
-- BEGIN
CREATE OR REPLACE FUNCTION public.last_project_update(p_pid text)
RETURNS timestamp with time zone
LANGUAGE plpgsql
AS $function$
DECLARE
v_last_ts timestamptz := NULL;
v_current_ts timestamptz;
v_current_str text;
v_current_unix numeric;
v_sid_rec record;
BEGIN
-- From the per-project tables: raw_shots, final_shots, event_log_full, info, and files
FOR v_sid_rec IN SELECT schema FROM public.projects WHERE pid = p_pid
LOOP
-- From raw_shots
EXECUTE 'SELECT max(tstamp) FROM ' || v_sid_rec.schema || '.raw_shots' INTO v_current_ts;
IF v_current_ts > v_last_ts OR v_last_ts IS NULL THEN
v_last_ts := v_current_ts;
END IF;
-- From final_shots
EXECUTE 'SELECT max(tstamp) FROM ' || v_sid_rec.schema || '.final_shots' INTO v_current_ts;
IF v_current_ts > v_last_ts OR v_last_ts IS NULL THEN
v_last_ts := v_current_ts;
END IF;
-- From info where key = 'qc'
EXECUTE 'SELECT value->>''updatedOn'' FROM ' || v_sid_rec.schema || '.info WHERE key = ''qc''' INTO v_current_str;
IF v_current_str IS NOT NULL THEN
v_current_ts := v_current_str::timestamptz;
IF v_current_ts > v_last_ts OR v_last_ts IS NULL THEN
v_last_ts := v_current_ts;
END IF;
END IF;
-- From files hash second part, only for valid colon-separated hashes
EXECUTE 'SELECT max( split_part(hash, '':'', 2)::numeric ) FROM ' || v_sid_rec.schema || '.files WHERE hash ~ ''^[0-9]+:[0-9]+\\.[0-9]+:[0-9]+\\.[0-9]+:[0-9a-f]+$''' INTO v_current_unix;
IF v_current_unix IS NOT NULL THEN
v_current_ts := to_timestamp(v_current_unix);
IF v_current_ts > v_last_ts OR v_last_ts IS NULL THEN
v_last_ts := v_current_ts;
END IF;
END IF;
-- From event_log_full
EXECUTE 'SELECT max(tstamp) FROM ' || v_sid_rec.schema || '.event_log_full' INTO v_current_ts;
IF v_current_ts > v_last_ts OR v_last_ts IS NULL THEN
v_last_ts := v_current_ts;
END IF;
END LOOP;
-- From projects.meta->_updatedOn
SELECT (meta->>'_updatedOn')::timestamptz FROM public.projects WHERE pid = p_pid INTO v_current_ts;
IF v_current_ts > v_last_ts OR v_last_ts IS NULL THEN
v_last_ts := v_current_ts;
END IF;
RETURN v_last_ts;
END;
$function$;
-- END
END;
$outer$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE pg_temp.upgrade () AS $outer$
DECLARE
row RECORD;
current_db_version TEXT;
BEGIN
SELECT value->>'db_schema' INTO current_db_version FROM public.info WHERE key = 'version';
IF current_db_version >= '0.6.6' THEN
RAISE EXCEPTION
USING MESSAGE='Patch already applied';
END IF;
IF current_db_version != '0.6.5' THEN
RAISE EXCEPTION
USING MESSAGE='Invalid database version: ' || current_db_version,
HINT='Ensure all previous patches have been applied.';
END IF;
CALL pg_temp.upgrade_database();
END;
$outer$ LANGUAGE plpgsql;
CALL pg_temp.upgrade();
CALL pg_temp.show_notice('Cleaning up');
DROP PROCEDURE pg_temp.upgrade_database ();
DROP PROCEDURE pg_temp.upgrade ();
CALL pg_temp.show_notice('Updating db_schema version');
INSERT INTO public.info VALUES ('version', '{"db_schema": "0.6.6"}')
ON CONFLICT (key) DO UPDATE
SET value = public.info.value || '{"db_schema": "0.6.6"}' WHERE public.info.key = 'version';
CALL pg_temp.show_notice('All done. You may now run "COMMIT;" to persist the changes');
DROP PROCEDURE pg_temp.show_notice (notice text);
--
--NOTE Run `COMMIT;` now if all went well
--

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -7,14 +7,20 @@
id: missing_shots
check: |
const sequence = currentItem;
const sp0 = Math.min(sequence.fsp, sequence.lsp);
const sp1 = Math.max(sequence.fsp, sequence.lsp);
const missing = preplots.filter(r => r.line == sequence.line &&
r.point >= sp0 && r.point <= sp1 &&
!sequence.shots.find(s => s.point == r.point)
);
let results;
if (sequence.missing_shots) {
results = {
shots: {}
}
const missing_shots = missingShotpoints.filter(i => !i.ntba);
for (const shot of missing_shots) {
results.shots[shot.point] = { remarks: "Missed shot", labels: [ "QC", "QCAcq" ] };
}
} else {
results = true;
}
missing.length == 0 || missing.map(r => `Missing shot: ${r.point}`).join("\n")
results;
-
name: "Gun QC"
disabled: false
@@ -25,25 +31,27 @@
iterate: "sequences"
id: seq_no_gun_data
check: |
const sequence = currentItem;
currentItem.has_smsrc_data || "Sequence has no gun data"
shotpoints.some(i => i.meta?.raw?.smsrc) || "Sequence has no gun data"
-
name: "Missing gun data"
id: missing_gun_data
ignoreAllFailed: true
check: |
sequences.some(s => s.sequence == currentItem.sequence && s.has_smsrc_data)
? (!!currentItem._("raw_meta.smsrc.guns") || "Missing gun data")
: true
!!currentItem._("raw_meta.smsrc.guns")
? true
: "Missing gun data"
-
name: "No fire"
id: no_fire
check: |
const currentShot = currentItem;
const gunData = currentItem._("raw_meta.smsrc");
(gunData && gunData.num_nofire != 0)
? `Source ${gunData.src_number}: No fire (${gunData.num_nofire} guns)`
: true;
// const currentShot = currentItem;
// const gunData = currentItem._("raw_meta.smsrc");
// (gunData && gunData.guns && gunData.guns.length != gunData.num_active)
// ? `Source ${gunData.src_number}: No fire (${gunData.guns.length - gunData.num_active} guns)`
// : true;
// Disabled due to changes in Smartsource software. It now returns all guns on every shot, not just active ones.
true
-
name: "Pressure errors"
@@ -56,8 +64,8 @@
.guns
.filter(gun => ((gun[2] == gunData.src_number) && (gun[pressure]/parameters.gunPressureNominal - 1) > parameters.gunPressureToleranceRatio))
.map(gun =>
`source ${gun[2]}, string ${gun[0]}, gun ${gun[1]}, pressure: ${gun[pressure]} / ${parameters.gunPressureNominal} = ${(Math.abs(gunData.manifold/parameters.gunPressureNominal - 1)*100).toFixed(1)}% > ${(parameters.gunPressureToleranceRatio*100).toFixed(1)}%`
);
`source ${gun[2]}, string ${gun[0]}, gun ${gun[1]}, pressure: ${gun[pressure]} / ${parameters.gunPressureNominal} = ${(Math.abs(gun[pressure]/parameters.gunPressureNominal - 1)*100).toFixed(2)}% > ${(parameters.gunPressureToleranceRatio*100).toFixed(2)}%`
).join(" \n");
results && results.length
? results
: true
@@ -159,7 +167,7 @@
.filter(gun => Math.abs(gun[firetime]-gun[aimpoint]) >= parameters.gunTimingWarning && Math.abs(gun[firetime]-gun[aimpoint]) <= parameters.gunTiming)
.forEach(gun => {
const value = Math.abs(gun[firetime]-gun[aimpoint]);
result.push(`Delta error: source ${gun[2]}, string ${gun[0]}, gun ${gun[1]}: ${parameters.gunTimingWarning} ≤ ${value.toFixed(2)} ≤ ${parameters.gunTiming}`);
result.push(`Delta warning: source ${gun[2]}, string ${gun[0]}, gun ${gun[1]}: ${parameters.gunTimingWarning} ≤ ${value.toFixed(2)} ≤ ${parameters.gunTiming}`);
});
}
if (result.length) {
@@ -201,7 +209,7 @@
check: |
const currentShot = currentItem;
Math.abs(currentShot.error_i) <= parameters.crosslineError
|| `Crossline error (${currentShot.type}): ${currentShot.error_i.toFixed(1)} > ${parameters.crosslineError}`
|| `Crossline error (${currentShot.type}): ${currentShot.error_i.toFixed(2)} > ${parameters.crosslineError}`
-
name: "Inline"
@@ -209,7 +217,7 @@
check: |
const currentShot = currentItem;
Math.abs(currentShot.error_j) <= parameters.inlineError
|| `Inline error (${currentShot.type}): ${currentShot.error_j.toFixed(1)} > ${parameters.inlineError}`
|| `Inline error (${currentShot.type}): ${currentShot.error_j.toFixed(2)} > ${parameters.inlineError}`
-
name: "Centre of source preplot deviation (moving average)"
@@ -222,11 +230,16 @@
id: crossline_average
check: |
const currentSequence = currentItem;
const i_err = currentSequence.shots.filter(s => s.error_i != null).map(a => a.error_i);
//const i_err = shotpoints.filter(s => s.error_i != null).map(a => a.error_i);
const i_err = shotpoints.map(i =>
(i.errorfinal?.coordinates ?? i.errorraw?.coordinates)[0]
)
.filter(i => !isNaN(i));
if (i_err.length) {
const avg = i_err.reduce( (a, b) => a+b)/i_err.length;
avg <= parameters.crosslineErrorAverage ||
`Average crossline error: ${avg.toFixed(1)} > ${parameters.crosslineErrorAverage}`
`Average crossline error: ${avg.toFixed(2)} > ${parameters.crosslineErrorAverage}`
} else {
`Sequence ${currentSequence.sequence} has no shots within preplot`
}
@@ -239,16 +252,27 @@
check: |
const currentSequence = currentItem;
const n = parameters.inlineErrorRunningAverageShots; // For brevity
const results = currentSequence.shots.slice(n/2, -n/2).map( (shot, index) => {
const shots = currentSequence.shots.slice(index, index+n).map(i => i.error_j).filter(i => i !== null);
const results = shotpoints.slice(n/2, -n/2).map( (shot, index) => {
const shots = shotpoints.slice(index, index+n).map(i =>
(i.errorfinal?.coordinates ?? i.errorraw?.coordinates)[1]
).filter(i => i !== null);
if (!shots.length) {
// We are outside the preplot
// Nothing to see here, move along
return true;
}
const mean = shots.reduce( (a, b) => a+b ) / shots.length;
return Math.abs(mean) <= parameters.inlineErrorRunningAverageValue ||
`Running average inline error: shot ${shot.point}, ${mean.toFixed(1)} > ${parameters.inlineErrorRunningAverageValue}`
return Math.abs(mean) <= parameters.inlineErrorRunningAverageValue || [
shot.point,
{
remarks: `Running average inline error: ${mean.toFixed(2)} > ${parameters.inlineErrorRunningAverageValue}`,
labels: [ "QC", "QCNav" ]
}
]
}).filter(i => i !== true);
results.length == 0 || results.join("\n");
results.length == 0 || {
remarks: "Sequence exceeds inline error running average limit",
shots: Object.fromEntries(results)
}

etc/ssl/README.md Normal file
View File

@@ -0,0 +1,3 @@
# TLS certificates directory
Drop TLS certificates required by Dougal in this directory. It is excluded by [`.gitignore`](../../.gitignore) so its contents should never be committed by accident (and shouldn't be committed on purpose!).

View File

@@ -0,0 +1,968 @@
const codeToType = {
0: Int8Array,
1: Uint8Array,
2: Int16Array,
3: Uint16Array,
4: Int32Array,
5: Uint32Array,
7: Float32Array,
8: Float64Array,
9: BigInt64Array,
10: BigUint64Array
};
const typeToBytes = {
Int8Array: 1,
Uint8Array: 1,
Int16Array: 2,
Uint16Array: 2,
Int32Array: 4,
Uint32Array: 4,
Float32Array: 4,
Float64Array: 8,
BigInt64Array: 8,
BigUint64Array: 8
};
function readTypedValue(view, offset, type) {
switch (type) {
case Int8Array: return view.getInt8(offset);
case Uint8Array: return view.getUint8(offset);
case Int16Array: return view.getInt16(offset, true);
case Uint16Array: return view.getUint16(offset, true);
case Int32Array: return view.getInt32(offset, true);
case Uint32Array: return view.getUint32(offset, true);
case Float32Array: return view.getFloat32(offset, true);
case Float64Array: return view.getFloat64(offset, true);
case BigInt64Array: return view.getBigInt64(offset, true);
case BigUint64Array: return view.getBigUint64(offset, true);
default: throw new Error(`Unsupported type: ${type.name}`);
}
}
function writeTypedValue(view, offset, value, type) {
switch (type) {
case Int8Array: view.setInt8(offset, value); break;
case Uint8Array: view.setUint8(offset, value); break;
case Int16Array: view.setInt16(offset, value, true); break;
case Uint16Array: view.setUint16(offset, value, true); break;
case Int32Array: view.setInt32(offset, value, true); break;
case Uint32Array: view.setUint32(offset, value, true); break;
case Float32Array: view.setFloat32(offset, value, true); break;
case Float64Array: view.setFloat64(offset, value, true); break;
case BigInt64Array: view.setBigInt64(offset, BigInt(value), true); break;
case BigUint64Array: view.setBigUint64(offset, BigInt(value), true); break;
default: throw new Error(`Unsupported type: ${type.name}`);
}
}
class DougalBinaryBundle extends ArrayBuffer {
static HEADER_LENGTH = 4; // Length of a bundle header
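// Binary layout, as inferred from the readers in this file (all
// multi-byte values little-endian):
//
//   Bundle header (4 bytes): a Uint32 whose low byte is the marker
//   0x1C and whose upper 24 bits are the bundle body length.
//
//   Chunk header (12 bytes):
//     0      chunk type (0x11 sequential, 0x12 interleaved)
//     1      udv (user-defined value)
//     2-3    count of j records (Uint16)
//     4-5    i (Uint16)
//     6-7    j0 (Uint16)
//     8-9    Δj (Int16)
//     10     Δelem count
//     11     elem count
//   followed by the preface (Δelem type bytes, then elem type bytes),
//   the Δelem initial values, padding to a 4-byte boundary, the
//   record data, and padding to a 4-byte boundary.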
/** Clone an existing ArrayBuffer into a DougalBinaryBundle
*/
static clone (buffer) {
const clone = new DougalBinaryBundle(buffer.byteLength);
const uint8Array = new Uint8Array(buffer);
const uint8ArrayClone = new Uint8Array(clone);
uint8ArrayClone.set(uint8Array);
return clone;
}
constructor (length, options) {
super (length, options);
}
/** Get the count of bundles in this buffer.
*
* Stops at the first offset that does not look like a bundle header.
*/
get bundleCount () {
let count = 0;
let currentBundleOffset = 0;
const view = new DataView(this);
while (currentBundleOffset < this.byteLength) {
const currentBundleHeader = view.getUint32(currentBundleOffset, true);
if ((currentBundleHeader & 0xff) !== 0x1c) {
// This is not a bundle
return count;
}
let currentBundleLength = currentBundleHeader >>> 8;
currentBundleOffset += currentBundleLength + DougalBinaryBundle.HEADER_LENGTH;
count++;
}
return count;
}
/** Get the number of chunks in the bundles of this buffer
*/
get chunkCount () {
let count = 0;
let bundleOffset = 0;
const view = new DataView(this);
while (bundleOffset < this.byteLength) {
const header = view.getUint32(bundleOffset, true);
if ((header & 0xFF) !== 0x1C) break;
const length = header >>> 8;
if (bundleOffset + 4 + length > this.byteLength) break;
let chunkOffset = bundleOffset + 4; // relative to buffer start
while (chunkOffset < bundleOffset + 4 + length) {
const chunkType = view.getUint8(chunkOffset);
if (chunkType !== 0x11 && chunkType !== 0x12) break;
const cCount = view.getUint16(chunkOffset + 2, true);
const ΔelemC = view.getUint8(chunkOffset + 10);
const elemC = view.getUint8(chunkOffset + 11);
let localOffset = 12; // header size
localOffset += ΔelemC + elemC; // preface
// initial values
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(chunkOffset + 12 + k);
const baseCode = typeByte & 0xF;
const baseType = codeToType[baseCode];
if (!baseType) throw new Error('Invalid base type code');
localOffset += typeToBytes[baseType.name];
}
// pad after initial
while (localOffset % 4 !== 0) localOffset++;
if (chunkType === 0x11) { // Sequential
// record data: Δelems incrs
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(chunkOffset + 12 + k);
const incrCode = typeByte >> 4;
const incrType = codeToType[incrCode];
if (!incrType) throw new Error('Invalid incr type code');
localOffset += cCount * typeToBytes[incrType.name];
}
// elems
for (let k = 0; k < elemC; k++) {
const typeCode = view.getUint8(chunkOffset + 12 + ΔelemC + k);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid elem type code');
localOffset += cCount * typeToBytes[type.name];
}
} else { // Interleaved
// Compute exact stride for interleaved record data
let ΔelemStride = 0;
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(chunkOffset + 12 + k);
const incrCode = typeByte >> 4;
const incrType = codeToType[incrCode];
if (!incrType) throw new Error('Invalid incr type code');
ΔelemStride += typeToBytes[incrType.name];
}
let elemStride = 0;
for (let k = 0; k < elemC; k++) {
const typeCode = view.getUint8(chunkOffset + 12 + ΔelemC + k);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid elem type code');
elemStride += typeToBytes[type.name];
}
const recordStride = ΔelemStride + elemStride;
localOffset += cCount * recordStride;
}
// pad after record
while (localOffset % 4 !== 0) localOffset++;
chunkOffset += localOffset;
count++;
}
bundleOffset += 4 + length;
}
return count;
}
/** Return an array of DougalBinaryChunkSequential or DougalBinaryChunkInterleaved instances
*/
chunks () {
const chunks = [];
let bundleOffset = 0;
const view = new DataView(this);
while (bundleOffset < this.byteLength) {
const header = view.getUint32(bundleOffset, true);
if ((header & 0xFF) !== 0x1C) break;
const length = header >>> 8;
if (bundleOffset + 4 + length > this.byteLength) break;
let chunkOffset = bundleOffset + 4;
while (chunkOffset < bundleOffset + 4 + length) {
const chunkType = view.getUint8(chunkOffset);
if (chunkType !== 0x11 && chunkType !== 0x12) break;
const cCount = view.getUint16(chunkOffset + 2, true);
const ΔelemC = view.getUint8(chunkOffset + 10);
const elemC = view.getUint8(chunkOffset + 11);
let localOffset = 12;
localOffset += ΔelemC + elemC;
// initial values
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(chunkOffset + 12 + k);
const baseCode = typeByte & 0xF;
const baseType = codeToType[baseCode];
if (!baseType) throw new Error('Invalid base type code');
localOffset += typeToBytes[baseType.name];
}
// pad after initial
while (localOffset % 4 !== 0) localOffset++;
if (chunkType === 0x11) { // Sequential
// record data: Δelems incrs
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(chunkOffset + 12 + k);
const incrCode = typeByte >> 4;
const incrType = codeToType[incrCode];
if (!incrType) throw new Error('Invalid incr type code');
localOffset += cCount * typeToBytes[incrType.name];
}
// elems
for (let k = 0; k < elemC; k++) {
const typeCode = view.getUint8(chunkOffset + 12 + ΔelemC + k);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid elem type code');
localOffset += cCount * typeToBytes[type.name];
}
} else { // Interleaved
// Compute exact stride for interleaved record data
let ΔelemStride = 0;
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(chunkOffset + 12 + k);
const incrCode = typeByte >> 4;
const incrType = codeToType[incrCode];
if (!incrType) throw new Error('Invalid incr type code');
ΔelemStride += typeToBytes[incrType.name];
}
let elemStride = 0;
for (let k = 0; k < elemC; k++) {
const typeCode = view.getUint8(chunkOffset + 12 + ΔelemC + k);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid elem type code');
elemStride += typeToBytes[type.name];
}
const recordStride = ΔelemStride + elemStride;
localOffset += cCount * recordStride;
}
// pad after record
while (localOffset % 4 !== 0) localOffset++;
switch (chunkType) {
case 0x11:
chunks.push(new DougalBinaryChunkSequential(this, chunkOffset, localOffset));
break;
case 0x12:
chunks.push(new DougalBinaryChunkInterleaved(this, chunkOffset, localOffset));
break;
default:
throw new Error('Invalid chunk type');
}
chunkOffset += localOffset;
}
bundleOffset += 4 + length;
}
return chunks;
}
/** Return an ArrayBuffer containing all data from all
* chunks including reconstructed i, j and incremental
* values as follows:
*
* <i_0> <i_1> … <i_x> // i values (constant)
* <j_0> <j_1> … <j_x> // j values (j0 + Δj*i)
* <Δelem_0_0> <Δelem_0_1> … <Δelem_0_x> // reconstructed Δelem0 (uses baseType)
* <Δelem_1_0> <Δelem_1_1> … <Δelem_1_x> // reconstructed Δelem1
* …
* <Δelem_y_0> <Δelem_y_1> … <Δelem_y_x> // reconstructed Δelem_y
* <elem_0_0> <elem_0_1> … <elem_0_x> // First elem
* <elem_1_0> <elem_1_1> … <elem_1_x> // Second elem
* …
* <elem_z_0> <elem_z_1> … <elem_z_x> // Last elem
*
* It does not matter whether the underlying chunks are
* sequential or interleaved. This function will transform
* as necessary.
*
*/
getDataSequentially () {
const chunks = this.chunks();
if (chunks.length === 0) return new ArrayBuffer(0);
const firstChunk = chunks[0];
const ΔelemC = firstChunk.ΔelemCount;
const elemC = firstChunk.elemCount;
// Check consistency across chunks
for (const chunk of chunks) {
if (chunk.ΔelemCount !== ΔelemC || chunk.elemCount !== elemC) {
throw new Error('Inconsistent chunk structures');
}
}
// Get types from first chunk
const view = new DataView(firstChunk);
const ΔelemBaseTypes = [];
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(12 + k);
const baseCode = typeByte & 0xF;
const baseType = codeToType[baseCode];
if (!baseType) throw new Error('Invalid base type code');
ΔelemBaseTypes.push(baseType);
}
const elemTypes = [];
for (let k = 0; k < elemC; k++) {
const typeCode = view.getUint8(12 + ΔelemC + k);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid elem type code');
elemTypes.push(type);
}
// Compute total records
const totalN = chunks.reduce((sum, c) => sum + c.jCount, 0);
// Compute sizes
const size_i = totalN * 2; // Uint16 for i
const size_j = totalN * 4; // Int32 for j
let size_Δelems = 0;
for (const t of ΔelemBaseTypes) {
size_Δelems += totalN * typeToBytes[t.name];
}
let size_elems = 0;
for (const t of elemTypes) {
size_elems += totalN * typeToBytes[t.name];
}
const totalSize = size_i + size_j + size_Δelems + size_elems;
const ab = new ArrayBuffer(totalSize);
const dv = new DataView(ab);
// Write i's
let off = 0;
for (const chunk of chunks) {
const i = chunk.i;
for (let idx = 0; idx < chunk.jCount; idx++) {
dv.setUint16(off, i, true);
off += 2;
}
}
// Write j's
off = size_i;
for (const chunk of chunks) {
const j0 = chunk.j0;
const Δj = chunk.Δj;
for (let idx = 0; idx < chunk.jCount; idx++) {
const j = j0 + idx * Δj;
dv.setInt32(off, j, true);
off += 4;
}
}
// Write Δelems
off = size_i + size_j;
for (let m = 0; m < ΔelemC; m++) {
const type = ΔelemBaseTypes[m];
const bytes = typeToBytes[type.name];
for (const chunk of chunks) {
const arr = chunk.Δelem(m);
for (let idx = 0; idx < chunk.jCount; idx++) {
writeTypedValue(dv, off, arr[idx], type);
off += bytes;
}
}
}
// Write elems
for (let m = 0; m < elemC; m++) {
const type = elemTypes[m];
const bytes = typeToBytes[type.name];
for (const chunk of chunks) {
const arr = chunk.elem(m);
for (let idx = 0; idx < chunk.jCount; idx++) {
writeTypedValue(dv, off, arr[idx], type);
off += bytes;
}
}
}
return ab;
}
/** Return an ArrayBuffer containing all data from all
* chunks including reconstructed i, j and incremental
* values, interleaved as follows:
*
* <i_0> <j_0> <Δelem_0_0> <Δelem_1_0> … <Δelem_y_0> <elem_0_0> <elem_1_0> … <elem_z_0>
* <i_1> <j_1> <Δelem_0_1> <Δelem_1_1> … <Δelem_y_1> <elem_0_1> <elem_1_1> … <elem_z_1>
* …
* <i_x> <j_x> <Δelem_0_x> <Δelem_1_x> … <Δelem_y_x> <elem_0_x> <elem_1_x> … <elem_z_x>
*
* It does not matter whether the underlying chunks are
* sequential or interleaved. This function will transform
* as necessary.
*
*/
getDataInterleaved () {
const chunks = this.chunks();
if (chunks.length === 0) return new ArrayBuffer(0);
const firstChunk = chunks[0];
const ΔelemC = firstChunk.ΔelemCount;
const elemC = firstChunk.elemCount;
// Check consistency across chunks
for (const chunk of chunks) {
if (chunk.ΔelemCount !== ΔelemC || chunk.elemCount !== elemC) {
throw new Error('Inconsistent chunk structures');
}
}
// Get types from first chunk
const view = new DataView(firstChunk);
const ΔelemBaseTypes = [];
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(12 + k);
const baseCode = typeByte & 0xF;
const baseType = codeToType[baseCode];
if (!baseType) throw new Error('Invalid base type code');
ΔelemBaseTypes.push(baseType);
}
const elemTypes = [];
for (let k = 0; k < elemC; k++) {
const typeCode = view.getUint8(12 + ΔelemC + k);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid elem type code');
elemTypes.push(type);
}
// Compute total records
const totalN = chunks.reduce((sum, c) => sum + c.jCount, 0);
// Compute record size
const recordSize = 2 + 4 + // i (Uint16) + j (Int32)
ΔelemBaseTypes.reduce((sum, t) => sum + typeToBytes[t.name], 0) +
elemTypes.reduce((sum, t) => sum + typeToBytes[t.name], 0);
const totalSize = totalN * recordSize;
const ab = new ArrayBuffer(totalSize);
const dv = new DataView(ab);
let off = 0;
for (const chunk of chunks) {
const i = chunk.i;
const j0 = chunk.j0;
const Δj = chunk.Δj;
for (let idx = 0; idx < chunk.jCount; idx++) {
dv.setUint16(off, i, true);
off += 2;
const j = j0 + idx * Δj;
dv.setInt32(off, j, true);
off += 4;
for (let m = 0; m < ΔelemC; m++) {
const type = ΔelemBaseTypes[m];
const bytes = typeToBytes[type.name];
const arr = chunk.Δelem(m);
writeTypedValue(dv, off, arr[idx], type);
off += bytes;
}
for (let m = 0; m < elemC; m++) {
const type = elemTypes[m];
const bytes = typeToBytes[type.name];
const arr = chunk.elem(m);
writeTypedValue(dv, off, arr[idx], type);
off += bytes;
}
}
}
return ab;
}
get records () {
const data = [];
for (const record of this) {
data.push(record.slice(1));
}
return data;
}
[Symbol.iterator]() {
const chunks = this.chunks();
let chunkIndex = 0;
let chunkIterator = chunks.length > 0 ? chunks[0][Symbol.iterator]() : null;
return {
next() {
if (!chunkIterator) {
return { done: true };
}
let result = chunkIterator.next();
while (result.done && chunkIndex < chunks.length - 1) {
chunkIndex++;
chunkIterator = chunks[chunkIndex][Symbol.iterator]();
result = chunkIterator.next();
}
return result;
}
};
}
}
class DougalBinaryChunkSequential extends ArrayBuffer {
constructor (buffer, offset, length) {
super(length);
new Uint8Array(this).set(new Uint8Array(buffer, offset, length));
this._ΔelemCaches = new Array(this.ΔelemCount);
this._elemCaches = new Array(this.elemCount);
this._ΔelemBlockOffsets = null;
this._elemBlockOffsets = null;
this._recordOffset = null;
}
_getRecordOffset() {
if (this._recordOffset !== null) return this._recordOffset;
const view = new DataView(this);
const ΔelemC = this.ΔelemCount;
const elemC = this.elemCount;
let recordOffset = 12 + ΔelemC + elemC;
for (let k = 0; k < ΔelemC; k++) {
const tb = view.getUint8(12 + k);
const bc = tb & 0xF;
const bt = codeToType[bc];
recordOffset += typeToBytes[bt.name];
}
while (recordOffset % 4 !== 0) recordOffset++;
this._recordOffset = recordOffset;
return recordOffset;
}
_initBlockOffsets() {
if (this._ΔelemBlockOffsets !== null) return;
const view = new DataView(this);
const count = this.jCount;
const ΔelemC = this.ΔelemCount;
const elemC = this.elemCount;
const recordOffset = this._getRecordOffset();
this._ΔelemBlockOffsets = [];
let o = recordOffset;
for (let k = 0; k < ΔelemC; k++) {
this._ΔelemBlockOffsets[k] = o;
const tb = view.getUint8(12 + k);
const ic = tb >> 4;
const it = codeToType[ic];
o += count * typeToBytes[it.name];
}
this._elemBlockOffsets = [];
for (let k = 0; k < elemC; k++) {
this._elemBlockOffsets[k] = o;
const tc = view.getUint8(12 + ΔelemC + k);
const t = codeToType[tc];
o += count * typeToBytes[t.name];
}
}
/** Return the user-defined value
*/
get udv () {
return new DataView(this).getUint8(1);
}
/** Return the number of j elements in this chunk
*/
get jCount () {
return new DataView(this).getUint16(2, true);
}
/** Return the i value in this chunk
*/
get i () {
return new DataView(this).getUint16(4, true);
}
/** Return the j0 value in this chunk
*/
get j0 () {
return new DataView(this).getUint16(6, true);
}
/** Return the Δj value in this chunk
*/
get Δj () {
return new DataView(this).getInt16(8, true);
}
/** Return the Δelem_count value in this chunk
*/
get ΔelemCount () {
return new DataView(this).getUint8(10);
}
/** Return the elem_count value in this chunk
*/
get elemCount () {
return new DataView(this).getUint8(11);
}
/** Return a TypedArray (e.g., Uint16Array, …) for the n-th Δelem in the chunk
*/
Δelem (n) {
if (this._ΔelemCaches[n]) return this._ΔelemCaches[n];
if (n < 0 || n >= this.ΔelemCount) throw new Error(`Invalid Δelem index: ${n}`);
const view = new DataView(this);
const count = this.jCount;
const ΔelemC = this.ΔelemCount;
const typeByte = view.getUint8(12 + n);
const baseCode = typeByte & 0xF;
const incrCode = typeByte >> 4;
const baseType = codeToType[baseCode];
const incrType = codeToType[incrCode];
if (!baseType || !incrType) throw new Error('Invalid type codes for Δelem');
// Find offset for initial value of this Δelem
let initialOffset = 12 + ΔelemC + this.elemCount;
for (let k = 0; k < n; k++) {
const tb = view.getUint8(12 + k);
const bc = tb & 0xF;
const bt = codeToType[bc];
initialOffset += typeToBytes[bt.name];
}
let current = readTypedValue(view, initialOffset, baseType);
// Advance to start of record data (after all initials and pad)
const recordOffset = this._getRecordOffset();
// Find offset for deltas of this Δelem (skip previous Δelems' delta blocks)
this._initBlockOffsets();
const deltaOffset = this._ΔelemBlockOffsets[n];
// Reconstruct the array
const arr = new baseType(count);
const isBigInt = baseType === BigInt64Array || baseType === BigUint64Array;
arr[0] = current;
for (let idx = 1; idx < count; idx++) {
let delta = readTypedValue(view, deltaOffset + idx * typeToBytes[incrType.name], incrType);
if (isBigInt) {
delta = BigInt(delta);
current += delta;
} else {
current += delta;
}
arr[idx] = current;
}
this._ΔelemCaches[n] = arr;
return arr;
}
/** Return a TypedArray (e.g., Uint16Array, …) for the n-th elem in the chunk
*/
elem (n) {
if (this._elemCaches[n]) return this._elemCaches[n];
if (n < 0 || n >= this.elemCount) throw new Error(`Invalid elem index: ${n}`);
const view = new DataView(this);
const count = this.jCount;
const ΔelemC = this.ΔelemCount;
const elemC = this.elemCount;
const typeCode = view.getUint8(12 + ΔelemC + n);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid type code for elem');
// Find offset for this elem's data block
this._initBlockOffsets();
const elemOffset = this._elemBlockOffsets[n];
// Create and populate the array
const arr = new type(count);
const bytes = typeToBytes[type.name];
for (let idx = 0; idx < count; idx++) {
arr[idx] = readTypedValue(view, elemOffset + idx * bytes, type);
}
this._elemCaches[n] = arr;
return arr;
}
getRecord (index) {
if (index < 0 || index >= this.jCount) throw new Error(`Invalid record index: ${index}`);
const arr = [this.udv, this.i, this.j0 + index * this.Δj];
for (let m = 0; m < this.ΔelemCount; m++) {
const values = this.Δelem(m);
arr.push(values[index]);
}
for (let m = 0; m < this.elemCount; m++) {
const values = this.elem(m);
arr.push(values[index]);
}
return arr;
}
[Symbol.iterator]() {
let index = 0;
const chunk = this;
return {
next() {
if (index < chunk.jCount) {
return { value: chunk.getRecord(index++), done: false };
} else {
return { done: true };
}
}
};
}
}
class DougalBinaryChunkInterleaved extends ArrayBuffer {
constructor(buffer, offset, length) {
super(length);
new Uint8Array(this).set(new Uint8Array(buffer, offset, length));
this._incrStrides = [];
this._elemStrides = [];
this._incrOffsets = [];
this._elemOffsets = [];
this._recordStride = 0;
this._recordOffset = null;
this._initStrides();
this._ΔelemCaches = new Array(this.ΔelemCount);
this._elemCaches = new Array(this.elemCount);
}
_getRecordOffset() {
if (this._recordOffset !== null) return this._recordOffset;
const view = new DataView(this);
const ΔelemC = this.ΔelemCount;
const elemC = this.elemCount;
let recordOffset = 12 + ΔelemC + elemC;
for (let k = 0; k < ΔelemC; k++) {
const tb = view.getUint8(12 + k);
const bc = tb & 0xF;
const bt = codeToType[bc];
recordOffset += typeToBytes[bt.name];
}
while (recordOffset % 4 !== 0) recordOffset++;
this._recordOffset = recordOffset;
return recordOffset;
}
_initStrides() {
const view = new DataView(this);
const ΔelemC = this.ΔelemCount;
const elemC = this.elemCount;
// Compute incr strides and offsets
let incrOffset = 0;
for (let k = 0; k < ΔelemC; k++) {
const typeByte = view.getUint8(12 + k);
const incrCode = typeByte >> 4;
const incrType = codeToType[incrCode];
if (!incrType) throw new Error('Invalid incr type code');
this._incrOffsets.push(incrOffset);
const bytes = typeToBytes[incrType.name];
this._incrStrides.push(bytes);
incrOffset += bytes;
this._recordStride += bytes;
}
// Compute elem strides and offsets
let elemOffset = incrOffset;
for (let k = 0; k < elemC; k++) {
const typeCode = view.getUint8(12 + ΔelemC + k);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid elem type code');
this._elemOffsets.push(elemOffset);
const bytes = typeToBytes[type.name];
this._elemStrides.push(bytes);
elemOffset += bytes;
this._recordStride += bytes;
}
}
get udv() {
return new DataView(this).getUint8(1);
}
get jCount() {
return new DataView(this).getUint16(2, true);
}
get i() {
return new DataView(this).getUint16(4, true);
}
get j0() {
return new DataView(this).getUint16(6, true);
}
get Δj() {
return new DataView(this).getInt16(8, true);
}
get ΔelemCount() {
return new DataView(this).getUint8(10);
}
get elemCount() {
return new DataView(this).getUint8(11);
}
Δelem(n) {
if (this._ΔelemCaches[n]) return this._ΔelemCaches[n];
if (n < 0 || n >= this.ΔelemCount) throw new Error(`Invalid Δelem index: ${n}`);
const view = new DataView(this);
const count = this.jCount;
const ΔelemC = this.ΔelemCount;
const typeByte = view.getUint8(12 + n);
const baseCode = typeByte & 0xF;
const incrCode = typeByte >> 4;
const baseType = codeToType[baseCode];
const incrType = codeToType[incrCode];
if (!baseType || !incrType) throw new Error('Invalid type codes for Δelem');
// Find offset for initial value of this Δelem
let initialOffset = 12 + ΔelemC + this.elemCount;
for (let k = 0; k < n; k++) {
const tb = view.getUint8(12 + k);
const bc = tb & 0xF;
const bt = codeToType[bc];
initialOffset += typeToBytes[bt.name];
}
let current = readTypedValue(view, initialOffset, baseType);
// Find offset to start of record data
const recordOffset = this._getRecordOffset();
// Use precomputed offset for this Δelem
const deltaOffset = recordOffset + this._incrOffsets[n];
// Reconstruct the array
const arr = new baseType(count);
const isBigInt = baseType === BigInt64Array || baseType === BigUint64Array;
arr[0] = current;
for (let idx = 1; idx < count; idx++) {
let delta = readTypedValue(view, deltaOffset + idx * this._recordStride, incrType);
if (isBigInt) {
delta = BigInt(delta);
current += delta;
} else {
current += delta;
}
arr[idx] = current;
}
this._ΔelemCaches[n] = arr;
return arr;
}
elem(n) {
if (this._elemCaches[n]) return this._elemCaches[n];
if (n < 0 || n >= this.elemCount) throw new Error(`Invalid elem index: ${n}`);
const view = new DataView(this);
const count = this.jCount;
const ΔelemC = this.ΔelemCount;
const typeCode = view.getUint8(12 + ΔelemC + n);
const type = codeToType[typeCode];
if (!type) throw new Error('Invalid type code for elem');
// Find offset to start of record data
const recordOffset = this._getRecordOffset();
// Use precomputed offset for this elem (relative to start of record data)
const elemOffset = recordOffset + this._elemOffsets[n];
// Create and populate the array
const arr = new type(count);
const bytes = typeToBytes[type.name];
for (let idx = 0; idx < count; idx++) {
arr[idx] = readTypedValue(view, elemOffset + idx * this._recordStride, type);
}
this._elemCaches[n] = arr;
return arr;
}
getRecord (index) {
if (index < 0 || index >= this.jCount) throw new Error(`Invalid record index: ${index}`);
const arr = [this.udv, this.i, this.j0 + index * this.Δj];
for (let m = 0; m < this.ΔelemCount; m++) {
const values = this.Δelem(m);
arr.push(values[index]);
}
for (let m = 0; m < this.elemCount; m++) {
const values = this.elem(m);
arr.push(values[index]);
}
return arr;
}
[Symbol.iterator]() {
let index = 0;
const chunk = this;
return {
next() {
if (index < chunk.jCount) {
return { value: chunk.getRecord(index++), done: false };
} else {
return { done: true };
}
}
};
}
}
module.exports = { DougalBinaryBundle, DougalBinaryChunkSequential, DougalBinaryChunkInterleaved }
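// Example usage (a sketch; `buf` is assumed to be an ArrayBuffer
// holding encoded type 4 sequence data, e.g. from an API response,
// and the module path is illustrative):
//
//   const { DougalBinaryBundle } = require('./DougalBinaryBundle');
//   const bundle = DougalBinaryBundle.clone(buf);
//   console.log(bundle.bundleCount, bundle.chunkCount);
//   for (const record of bundle) {
//     // record is [udv, i, j, ...Δelems, ...elems]
//   }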

View File

@@ -0,0 +1,327 @@
const codeToType = {
0: Int8Array,
1: Uint8Array,
2: Int16Array,
3: Uint16Array,
4: Int32Array,
5: Uint32Array,
7: Float32Array,
8: Float64Array,
9: BigInt64Array,
10: BigUint64Array
};
const typeToBytes = {
Int8Array: 1,
Uint8Array: 1,
Int16Array: 2,
Uint16Array: 2,
Int32Array: 4,
Uint32Array: 4,
Float32Array: 4,
Float64Array: 8,
BigInt64Array: 8,
BigUint64Array: 8
};
function sequential(binary) {
if (!(binary instanceof Uint8Array) || binary.length < 4) {
throw new Error('Invalid binary input');
}
const view = new DataView(binary.buffer, binary.byteOffset, binary.byteLength);
let offset = 0;
// Initialize result (assuming single i value for simplicity; extend for multiple i values if needed)
const result = { i: null, j: [], Δelems: [], elems: [] };
// Process bundles
while (offset < binary.length) {
// Read bundle header
if (offset + 4 > binary.length) throw new Error('Incomplete bundle header');
const bundleHeader = view.getUint32(offset, true);
if ((bundleHeader & 0xFF) !== 0x1C) throw new Error('Invalid bundle marker');
const bundleLength = bundleHeader >>> 8;
offset += 4;
const bundleEnd = offset + bundleLength;
if (bundleEnd > binary.length) throw new Error('Bundle length exceeds input size');
// Process chunks in bundle
while (offset < bundleEnd) {
// Read chunk header
if (offset + 12 > bundleEnd) throw new Error('Incomplete chunk header');
const chunkType = view.getUint8(offset);
if (chunkType !== 0x11) throw new Error(`Unsupported chunk type: ${chunkType}`);
offset += 1; // Skip chunk type byte
offset += 1; // Skip udv
const count = view.getUint16(offset, true); offset += 2;
if (count > 65535) throw new Error('Chunk count exceeds 65535');
const iValue = view.getUint16(offset, true); offset += 2;
const j0 = view.getUint16(offset, true); offset += 2;
const Δj = view.getInt16(offset, true); offset += 2;
const ΔelemCount = view.getUint8(offset++); // Δelem_count
const elemCount = view.getUint8(offset++); // elem_count
// Set i value (assuming all chunks share the same i)
if (result.i === null) result.i = iValue;
else if (result.i !== iValue) throw new Error('Multiple i values not supported');
// Read preface (element types)
const ΔelemTypes = [];
for (let i = 0; i < ΔelemCount; i++) {
if (offset >= bundleEnd) throw new Error('Incomplete Δelem types');
const typeByte = view.getUint8(offset++);
const baseCode = typeByte & 0x0F;
const incrCode = typeByte >> 4;
if (!codeToType[baseCode] || !codeToType[incrCode]) {
throw new Error(`Invalid type code in Δelem: ${typeByte}`);
}
ΔelemTypes.push({ baseType: codeToType[baseCode], incrType: codeToType[incrCode] });
}
const elemTypes = [];
for (let i = 0; i < elemCount; i++) {
if (offset >= bundleEnd) throw new Error('Incomplete elem types');
const typeCode = view.getUint8(offset++);
if (!codeToType[typeCode]) throw new Error(`Invalid type code in elem: ${typeCode}`);
elemTypes.push(codeToType[typeCode]);
}
// Initialize Δelems and elems arrays if first chunk
if (!result.Δelems.length && ΔelemCount > 0) {
result.Δelems = Array(ΔelemCount).fill().map(() => []);
}
if (!result.elems.length && elemCount > 0) {
result.elems = Array(elemCount).fill().map(() => []);
}
// Read initial values for Δelems
const initialValues = [];
for (const { baseType } of ΔelemTypes) {
if (offset + typeToBytes[baseType.name] > bundleEnd) {
throw new Error('Incomplete initial values');
}
initialValues.push(readTypedValue(view, offset, baseType));
offset += typeToBytes[baseType.name];
}
// Skip padding
while (offset % 4 !== 0) {
if (offset >= bundleEnd) throw new Error('Incomplete padding after initial values');
offset++;
}
// Reconstruct j values
for (let idx = 0; idx < count; idx++) {
result.j.push(j0 + idx * Δj);
}
// Read record data (non-interleaved)
for (let i = 0; i < ΔelemCount; i++) {
let current = initialValues[i];
const values = result.Δelems[i];
const incrType = ΔelemTypes[i].incrType;
const isBigInt = typeof current === 'bigint';
for (let idx = 0; idx < count; idx++) {
if (offset + typeToBytes[incrType.name] > bundleEnd) {
throw new Error('Incomplete Δelem data');
}
let delta = readTypedValue(view, offset, incrType);
if (idx === 0) {
values.push(isBigInt ? Number(current) : current);
} else {
if (isBigInt) {
delta = BigInt(delta);
current += delta;
values.push(Number(current));
} else {
current += delta;
values.push(current);
}
}
offset += typeToBytes[incrType.name];
}
}
for (let i = 0; i < elemCount; i++) {
const values = result.elems[i];
const type = elemTypes[i];
const isBigInt = type === BigInt64Array || type === BigUint64Array;
for (let idx = 0; idx < count; idx++) {
if (offset + typeToBytes[type.name] > bundleEnd) {
throw new Error('Incomplete elem data');
}
let value = readTypedValue(view, offset, type);
values.push(isBigInt ? Number(value) : value);
offset += typeToBytes[type.name];
}
}
// Skip padding
while (offset % 4 !== 0) {
if (offset >= bundleEnd) throw new Error('Incomplete padding after record data');
offset++;
}
}
}
return result;
}
function interleaved(binary) {
if (!(binary instanceof Uint8Array) || binary.length < 4) {
throw new Error('Invalid binary input');
}
const view = new DataView(binary.buffer, binary.byteOffset, binary.byteLength);
let offset = 0;
// Initialize result (assuming single i value for simplicity; extend for multiple i values if needed)
const result = { i: null, j: [], Δelems: [], elems: [] };
// Process bundles
while (offset < binary.length) {
// Read bundle header
if (offset + 4 > binary.length) throw new Error('Incomplete bundle header');
const bundleHeader = view.getUint32(offset, true);
if ((bundleHeader & 0xFF) !== 0x1C) throw new Error('Invalid bundle marker');
const bundleLength = bundleHeader >>> 8;
offset += 4;
const bundleEnd = offset + bundleLength;
if (bundleEnd > binary.length) throw new Error('Bundle length exceeds input size');
// Process chunks in bundle
while (offset < bundleEnd) {
// Read chunk header
if (offset + 12 > bundleEnd) throw new Error('Incomplete chunk header');
const chunkType = view.getUint8(offset);
if (chunkType !== 0x12) throw new Error(`Unsupported chunk type: ${chunkType}`);
offset += 1; // Skip chunk type
offset += 1; // Skip udv
const count = view.getUint16(offset, true); offset += 2; // uint16, so inherently <= 65535
const iValue = view.getUint16(offset, true); offset += 2;
const j0 = view.getUint16(offset, true); offset += 2;
const Δj = view.getInt16(offset, true); offset += 2;
const ΔelemCount = view.getUint8(offset++); // Δelem_count
const elemCount = view.getUint8(offset++); // elem_count
// Set i value (assuming all chunks share the same i)
if (result.i === null) result.i = iValue;
else if (result.i !== iValue) throw new Error('Multiple i values not supported');
// Read preface (element types)
const ΔelemTypes = [];
for (let i = 0; i < ΔelemCount; i++) {
if (offset >= bundleEnd) throw new Error('Incomplete Δelem types');
const typeByte = view.getUint8(offset++);
const baseCode = typeByte & 0x0F;
const incrCode = typeByte >> 4;
if (!codeToType[baseCode] || !codeToType[incrCode]) {
throw new Error(`Invalid type code in Δelem: ${typeByte}`);
}
ΔelemTypes.push({ baseType: codeToType[baseCode], incrType: codeToType[incrCode] });
}
const elemTypes = [];
for (let i = 0; i < elemCount; i++) {
if (offset >= bundleEnd) throw new Error('Incomplete elem types');
const typeCode = view.getUint8(offset++);
if (!codeToType[typeCode]) throw new Error(`Invalid type code in elem: ${typeCode}`);
elemTypes.push(codeToType[typeCode]);
}
// Initialize Δelems and elems arrays if first chunk
if (!result.Δelems.length && ΔelemCount > 0) {
result.Δelems = Array(ΔelemCount).fill().map(() => []);
}
if (!result.elems.length && elemCount > 0) {
result.elems = Array(elemCount).fill().map(() => []);
}
// Read initial values for Δelems
const initialValues = [];
for (const { baseType } of ΔelemTypes) {
if (offset + typeToBytes[baseType.name] > bundleEnd) {
throw new Error('Incomplete initial values');
}
initialValues.push(readTypedValue(view, offset, baseType));
offset += typeToBytes[baseType.name];
}
// Skip padding
while (offset % 4 !== 0) {
if (offset >= bundleEnd) throw new Error('Incomplete padding after initial values');
offset++;
}
// Reconstruct j values
for (let idx = 0; idx < count; idx++) {
result.j.push(j0 + idx * Δj);
}
// Read interleaved record data
for (let idx = 0; idx < count; idx++) {
// Read Δelems
for (let i = 0; i < ΔelemCount; i++) {
const values = result.Δelems[i];
const incrType = ΔelemTypes[i].incrType;
const isBigInt = typeof initialValues[i] === 'bigint';
if (offset + typeToBytes[incrType.name] > bundleEnd) {
throw new Error('Incomplete Δelem data');
}
let delta = readTypedValue(view, offset, incrType);
offset += typeToBytes[incrType.name];
if (idx === 0) {
values.push(isBigInt ? Number(initialValues[i]) : initialValues[i]);
} else {
if (isBigInt) {
delta = BigInt(delta);
initialValues[i] += delta;
values.push(Number(initialValues[i]));
} else {
initialValues[i] += delta;
values.push(initialValues[i]);
}
}
}
// Read elems
for (let i = 0; i < elemCount; i++) {
const values = result.elems[i];
const type = elemTypes[i];
const isBigInt = type === BigInt64Array || type === BigUint64Array;
if (offset + typeToBytes[type.name] > bundleEnd) {
throw new Error('Incomplete elem data');
}
let value = readTypedValue(view, offset, type);
values.push(isBigInt ? Number(value) : value);
offset += typeToBytes[type.name];
}
}
// Skip padding
while (offset % 4 !== 0) {
if (offset >= bundleEnd) throw new Error('Incomplete padding after record data');
offset++;
}
}
}
return result;
}
function readTypedValue(view, offset, type) {
switch (type) {
case Int8Array: return view.getInt8(offset);
case Uint8Array: return view.getUint8(offset);
case Int16Array: return view.getInt16(offset, true);
case Uint16Array: return view.getUint16(offset, true);
case Int32Array: return view.getInt32(offset, true);
case Uint32Array: return view.getUint32(offset, true);
case Float32Array: return view.getFloat32(offset, true);
case Float64Array: return view.getFloat64(offset, true);
case BigInt64Array: return view.getBigInt64(offset, true);
case BigUint64Array: return view.getBigUint64(offset, true);
default: throw new Error(`Unsupported type: ${type.name}`);
}
}
module.exports = { sequential, interleaved };
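
A minimal decoding sketch (untested; the require path is an assumption, as is the 0x11 sequential chunk type, since the header-parsing half of the sequential decoder is not shown in this hunk). The byte array is the worked example from the package documentation further down, with the little-endian bundle header in front:

const { sequential } = require('./decode'); // path assumed
// 40-byte sequential bundle: header 0x241C (length 36, marker 0x1C), one chunk
// with i=7, j0=1068, Δj=2, one Δelem (BigUint64 base / Int16 increments) and
// two Uint8 elems.
const bundle = Uint8Array.from([
28, 36, 0, 0, 17, 0, 3, 0, 7, 0,
44, 4, 2, 0, 1, 2, 42, 1, 1, 116,
37, 158, 192, 138, 1, 0, 0, 0, 0, 0,
248, 213, 228, 220, 3, 2, 3, 4, 3, 4
]);
const { i, j, Δelems, elems } = sequential(bundle);
// i === 7, j: [1068, 1070, 1072]
// Δelems[0]: [1695448704372, 1695448693612, 1695448684624]
// elems: [[3, 2, 3], [4, 3, 4]]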


@@ -0,0 +1,380 @@
const typeToCode = {
Int8Array: 0,
Uint8Array: 1,
Int16Array: 2,
Uint16Array: 3,
Int32Array: 4,
Uint32Array: 5,
Float32Array: 7, // Float16 not natively supported in JS, use Float32
Float64Array: 8,
BigInt64Array: 9,
BigUint64Array: 10
};
const typeToBytes = {
Int8Array: 1,
Uint8Array: 1,
Int16Array: 2,
Uint16Array: 2,
Int32Array: 4,
Uint32Array: 4,
Float32Array: 4,
Float64Array: 8,
BigInt64Array: 8,
BigUint64Array: 8
};
function sequential(json, iGetter, jGetter, Δelems = [], elems = [], udv = 0) {
if (!Array.isArray(json) || !json.length) return new Uint8Array(0);
if (typeof iGetter !== 'function' || typeof jGetter !== 'function') throw new Error('i and j must be getter functions');
Δelems.forEach((elem, idx) => {
if (typeof elem.key !== 'function') throw new Error(`Δelems[${idx}].key must be a getter function`);
});
elems.forEach((elem, idx) => {
if (typeof elem.key !== 'function') throw new Error(`elems[${idx}].key must be a getter function`);
});
// Group records by i value
const groups = new Map();
for (const record of json) {
const iValue = iGetter(record);
if (iValue == null) throw new Error('Missing i value from getter');
if (!groups.has(iValue)) groups.set(iValue, []);
groups.get(iValue).push(record);
}
const maxBundleSize = 0xFFFFFF; // Max bundle length (24 bits)
const buffers = [];
// Process each group (i value)
for (const [iValue, records] of groups) {
// Sort records by j to ensure consistent order
records.sort((a, b) => jGetter(a) - jGetter(b));
const jValues = records.map(jGetter);
if (jValues.some(v => v == null)) throw new Error('Missing j value from getter');
// Split records into chunks based on Δj continuity
const chunks = [];
let currentChunk = [records[0]];
let currentJ0 = jValues[0];
let currentΔj = records.length > 1 ? jValues[1] - jValues[0] : 0;
for (let idx = 1; idx < records.length; idx++) {
const chunkIndex = chunks.reduce((sum, c) => sum + c.records.length, 0);
const expectedJ = currentJ0 + (idx - chunkIndex) * currentΔj;
if (jValues[idx] !== expectedJ || idx - chunkIndex >= 65536) {
chunks.push({ records: currentChunk, j0: currentJ0, Δj: currentΔj });
currentChunk = [records[idx]];
currentJ0 = jValues[idx];
currentΔj = idx + 1 < records.length ? jValues[idx + 1] - jValues[idx] : 0;
} else {
currentChunk.push(records[idx]);
}
}
if (currentChunk.length > 0) {
chunks.push({ records: currentChunk, j0: currentJ0, Δj: currentΔj });
}
// Calculate total size for all chunks in this group by simulating offsets
const chunkSizes = chunks.map(({ records: chunkRecords }) => {
if (chunkRecords.length > 65535) throw new Error(`Chunk size exceeds 65535 for i=${iValue}`);
let simulatedOffset = 0; // Relative to chunk start
simulatedOffset += 12; // Header
simulatedOffset += Δelems.length + elems.length; // Preface
simulatedOffset += Δelems.reduce((sum, e) => sum + typeToBytes[e.baseType.name], 0); // Initial values
while (simulatedOffset % 4 !== 0) simulatedOffset++; // Pad after initial
simulatedOffset += chunkRecords.length * (
Δelems.reduce((sum, e) => sum + typeToBytes[e.incrType.name], 0) +
elems.reduce((sum, e) => sum + typeToBytes[e.type.name], 0)
); // Record data
while (simulatedOffset % 4 !== 0) simulatedOffset++; // Pad after record
return simulatedOffset;
});
const totalChunkSize = chunkSizes.reduce((sum, size) => sum + size, 0);
// Start a new bundle if needed
const lastBundle = buffers[buffers.length - 1];
if (!lastBundle || lastBundle.offset + totalChunkSize > maxBundleSize) {
buffers.push({ offset: 4, buffer: null, view: null });
}
// Initialize DataView for current bundle
const currentBundle = buffers[buffers.length - 1];
if (!currentBundle.buffer) {
const requiredSize = totalChunkSize + 4;
currentBundle.buffer = new ArrayBuffer(requiredSize);
currentBundle.view = new DataView(currentBundle.buffer);
}
// Process each chunk
for (const { records: chunkRecords, j0, Δj } of chunks) {
const chunkSize = chunkSizes.shift();
// Ensure buffer is large enough
if (currentBundle.offset + chunkSize > currentBundle.buffer.byteLength) {
const newSize = currentBundle.offset + chunkSize;
const newBuffer = new ArrayBuffer(newSize);
new Uint8Array(newBuffer).set(new Uint8Array(currentBundle.buffer));
currentBundle.buffer = newBuffer;
currentBundle.view = new DataView(newBuffer);
}
// Write chunk header
let offset = currentBundle.offset;
currentBundle.view.setUint8(offset++, 0x11); // Chunk type
currentBundle.view.setUint8(offset++, udv); // udv
currentBundle.view.setUint16(offset, chunkRecords.length, true); offset += 2; // count
currentBundle.view.setUint16(offset, iValue, true); offset += 2; // i
currentBundle.view.setUint16(offset, j0, true); offset += 2; // j0
currentBundle.view.setInt16(offset, Δj, true); offset += 2; // Δj
currentBundle.view.setUint8(offset++, Δelems.length); // Δelem_count
currentBundle.view.setUint8(offset++, elems.length); // elem_count
// Write chunk preface (element types)
for (const elem of Δelems) {
const baseCode = typeToCode[elem.baseType.name];
const incrCode = typeToCode[elem.incrType.name];
currentBundle.view.setUint8(offset++, (incrCode << 4) | baseCode);
}
for (const elem of elems) {
currentBundle.view.setUint8(offset++, typeToCode[elem.type.name]);
}
// Write initial values for Δelems
for (const elem of Δelems) {
const value = elem.key(chunkRecords[0]);
if (value == null) throw new Error('Missing Δelem value from getter');
writeTypedValue(currentBundle.view, offset, value, elem.baseType);
offset += typeToBytes[elem.baseType.name];
}
// Pad to 4-byte boundary
while (offset % 4 !== 0) currentBundle.view.setUint8(offset++, 0);
// Write record data (non-interleaved)
for (const elem of Δelems) {
let prev = elem.key(chunkRecords[0]);
for (let idx = 0; idx < chunkRecords.length; idx++) {
const value = idx === 0 ? 0 : elem.key(chunkRecords[idx]) - prev;
writeTypedValue(currentBundle.view, offset, value, elem.incrType);
offset += typeToBytes[elem.incrType.name];
prev = elem.key(chunkRecords[idx]);
}
}
for (const elem of elems) {
for (const record of chunkRecords) {
const value = elem.key(record);
if (value == null) throw new Error('Missing elem value from getter');
writeTypedValue(currentBundle.view, offset, value, elem.type);
offset += typeToBytes[elem.type.name];
}
}
// Pad to 4-byte boundary
while (offset % 4 !== 0) currentBundle.view.setUint8(offset++, 0);
// Update bundle offset
currentBundle.offset = offset;
}
// Update bundle header
currentBundle.view.setUint32(0, 0x1C | ((currentBundle.offset - 4) << 8), true);
}
// Combine buffers into final Uint8Array
const finalLength = buffers.reduce((sum, b) => sum + b.offset, 0);
const result = new Uint8Array(finalLength);
let offset = 0;
for (const { buffer, offset: bundleOffset } of buffers) {
result.set(new Uint8Array(buffer, 0, bundleOffset), offset);
offset += bundleOffset;
}
return result;
}
function interleaved(json, iGetter, jGetter, Δelems = [], elems = [], udv = 0) {
if (!Array.isArray(json) || !json.length) return new Uint8Array(0);
if (typeof iGetter !== 'function' || typeof jGetter !== 'function') throw new Error('i and j must be getter functions');
Δelems.forEach((elem, idx) => {
if (typeof elem.key !== 'function') throw new Error(`Δelems[${idx}].key must be a getter function`);
});
elems.forEach((elem, idx) => {
if (typeof elem.key !== 'function') throw new Error(`elems[${idx}].key must be a getter function`);
});
// Group records by i value
const groups = new Map();
for (const record of json) {
const iValue = iGetter(record);
if (iValue == null) throw new Error('Missing i value from getter');
if (!groups.has(iValue)) groups.set(iValue, []);
groups.get(iValue).push(record);
}
const maxBundleSize = 0xFFFFFF; // Max bundle length (24 bits)
const buffers = [];
// Process each group (i value)
for (const [iValue, records] of groups) {
// Sort records by j to ensure consistent order
records.sort((a, b) => jGetter(a) - jGetter(b));
const jValues = records.map(jGetter);
if (jValues.some(v => v == null)) throw new Error('Missing j value from getter');
// Split records into chunks based on Δj continuity
const chunks = [];
let currentChunk = [records[0]];
let currentJ0 = jValues[0];
let currentΔj = records.length > 1 ? jValues[1] - jValues[0] : 0;
for (let idx = 1; idx < records.length; idx++) {
const chunkIndex = chunks.reduce((sum, c) => sum + c.records.length, 0);
const expectedJ = currentJ0 + (idx - chunkIndex) * currentΔj;
if (jValues[idx] !== expectedJ || idx - chunkIndex >= 65536) {
chunks.push({ records: currentChunk, j0: currentJ0, Δj: currentΔj });
currentChunk = [records[idx]];
currentJ0 = jValues[idx];
currentΔj = idx + 1 < records.length ? jValues[idx + 1] - jValues[idx] : 0;
} else {
currentChunk.push(records[idx]);
}
}
if (currentChunk.length > 0) {
chunks.push({ records: currentChunk, j0: currentJ0, Δj: currentΔj });
}
// Calculate total size for all chunks in this group by simulating offsets
const chunkSizes = chunks.map(({ records: chunkRecords }) => {
if (chunkRecords.length > 65535) throw new Error(`Chunk size exceeds 65535 for i=${iValue}`);
let simulatedOffset = 0; // Relative to chunk start
simulatedOffset += 12; // Header
simulatedOffset += Δelems.length + elems.length; // Preface
simulatedOffset += Δelems.reduce((sum, e) => sum + typeToBytes[e.baseType.name], 0); // Initial values
while (simulatedOffset % 4 !== 0) simulatedOffset++; // Pad after initial
simulatedOffset += chunkRecords.length * (
Δelems.reduce((sum, e) => sum + typeToBytes[e.incrType.name], 0) +
elems.reduce((sum, e) => sum + typeToBytes[e.type.name], 0)
); // Interleaved record data
while (simulatedOffset % 4 !== 0) simulatedOffset++; // Pad after record
return simulatedOffset;
});
const totalChunkSize = chunkSizes.reduce((sum, size) => sum + size, 0);
// Start a new bundle if needed
const lastBundle = buffers[buffers.length - 1];
if (!lastBundle || lastBundle.offset + totalChunkSize > maxBundleSize) {
buffers.push({ offset: 4, buffer: null, view: null });
}
// Initialize DataView for current bundle
const currentBundle = buffers[buffers.length - 1];
if (!currentBundle.buffer) {
const requiredSize = totalChunkSize + 4;
currentBundle.buffer = new ArrayBuffer(requiredSize);
currentBundle.view = new DataView(currentBundle.buffer);
}
// Process each chunk
for (const { records: chunkRecords, j0, Δj } of chunks) {
const chunkSize = chunkSizes.shift();
// Ensure buffer is large enough
if (currentBundle.offset + chunkSize > currentBundle.buffer.byteLength) {
const newSize = currentBundle.offset + chunkSize;
const newBuffer = new ArrayBuffer(newSize);
new Uint8Array(newBuffer).set(new Uint8Array(currentBundle.buffer));
currentBundle.buffer = newBuffer;
currentBundle.view = new DataView(newBuffer);
}
// Write chunk header
let offset = currentBundle.offset;
currentBundle.view.setUint8(offset++, 0x12); // Chunk type
currentBundle.view.setUint8(offset++, udv); // udv
currentBundle.view.setUint16(offset, chunkRecords.length, true); offset += 2; // count
currentBundle.view.setUint16(offset, iValue, true); offset += 2; // i
currentBundle.view.setUint16(offset, j0, true); offset += 2; // j0
currentBundle.view.setInt16(offset, Δj, true); offset += 2; // Δj
currentBundle.view.setUint8(offset++, Δelems.length); // Δelem_count
currentBundle.view.setUint8(offset++, elems.length); // elem_count
// Write chunk preface (element types)
for (const elem of Δelems) {
const baseCode = typeToCode[elem.baseType.name];
const incrCode = typeToCode[elem.incrType.name];
currentBundle.view.setUint8(offset++, (incrCode << 4) | baseCode);
}
for (const elem of elems) {
currentBundle.view.setUint8(offset++, typeToCode[elem.type.name]);
}
// Write initial values for Δelems
for (const elem of Δelems) {
const value = elem.key(chunkRecords[0]);
if (value == null) throw new Error('Missing Δelem value from getter');
writeTypedValue(currentBundle.view, offset, value, elem.baseType);
offset += typeToBytes[elem.baseType.name];
}
// Pad to 4-byte boundary
while (offset % 4 !== 0) currentBundle.view.setUint8(offset++, 0);
// Write interleaved record data
const prevValues = Δelems.map(elem => elem.key(chunkRecords[0]));
for (let idx = 0; idx < chunkRecords.length; idx++) {
// Write Δelems increments
for (let i = 0; i < Δelems.length; i++) {
const elem = Δelems[i];
const value = idx === 0 ? 0 : elem.key(chunkRecords[idx]) - prevValues[i];
writeTypedValue(currentBundle.view, offset, value, elem.incrType);
offset += typeToBytes[elem.incrType.name];
prevValues[i] = elem.key(chunkRecords[idx]);
}
// Write elems
for (const elem of elems) {
const value = elem.key(chunkRecords[idx]);
if (value == null) throw new Error('Missing elem value from getter');
writeTypedValue(currentBundle.view, offset, value, elem.type);
offset += typeToBytes[elem.type.name];
}
}
// Pad to 4-byte boundary
while (offset % 4 !== 0) currentBundle.view.setUint8(offset++, 0);
// Update bundle offset
currentBundle.offset = offset;
}
// Update bundle header
currentBundle.view.setUint32(0, 0x1C | ((currentBundle.offset - 4) << 8), true);
}
// Combine buffers into final Uint8Array
const finalLength = buffers.reduce((sum, b) => sum + b.offset, 0);
const result = new Uint8Array(finalLength);
let offset = 0;
for (const { buffer, offset: bundleOffset } of buffers) {
result.set(new Uint8Array(buffer, 0, bundleOffset), offset);
offset += bundleOffset;
}
return result;
}
function writeTypedValue(view, offset, value, type) {
switch (type) {
case Int8Array: view.setInt8(offset, value); break;
case Uint8Array: view.setUint8(offset, value); break;
case Int16Array: view.setInt16(offset, value, true); break;
case Uint16Array: view.setUint16(offset, value, true); break;
case Int32Array: view.setInt32(offset, value, true); break;
case Uint32Array: view.setUint32(offset, value, true); break;
case Float32Array: view.setFloat32(offset, value, true); break;
case Float64Array: view.setFloat64(offset, value, true); break;
case BigInt64Array: view.setBigInt64(offset, BigInt(value), true); break;
case BigUint64Array: view.setBigUint64(offset, BigInt(value), true); break;
default: throw new Error(`Unsupported type: ${type.name}`);
}
}
module.exports = { sequential, interleaved };
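
For reference, a hedged usage sketch of the interleaved encoder (the record fields are hypothetical; the API shape follows the function signature above):

const { interleaved } = require('./encode'); // path assumed
// Hypothetical survey records; i = line, j = point.
const records = [
{ line: 12, point: 100, depth: 31.5, quality: 3 },
{ line: 12, point: 101, depth: 32.1, quality: 4 },
{ line: 12, point: 102, depth: 32.8, quality: 4 }
];
const bundle = interleaved(
records,
r => r.line, // i getter
r => r.point, // j getter
[], // no delta-encoded elements in this sketch
[
{ key: r => r.depth, type: Float32Array },
{ key: r => r.quality, type: Uint8Array }
]
);
// Each record's depth and quality are written adjacently (chunk type 0x12),
// unlike the sequential layout, which writes each column in full.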


@@ -0,0 +1,139 @@
/** Binary encoder
*
* This module encodes scalar data from a grid-like source
* into a packed binary format for bandwidth efficiency and
* speed of access.
*
* Data are indexed by i & j values, with "i" being constant
* (e.g., a sequence or line number) and "j" expected to change
* by a constant, linear amount (e.g., point numbers). All data
* from consecutive "j" values will be encoded as a single array
* (or series of arrays if multiple values are encoded).
* If there is a jump in the "j" progression, a new "chunk" will
* be started with a new array (or series of arrays).
*
* Multiple values may be encoded per (i, j) pair, using any of
* the types supported by JavaScript's TypedArray except for
* Float16Array and Uint8ClampedArray. Each variable can be encoded with
* a different size.
*
* Values may be encoded directly or as deltas from an initial
* value. The latter is particularly efficient when dealing with
* monotonically incrementing data, such as timestamps.
*
* The conceptual packet format for sequentially encoded data
* looks like this:
*
* <msg-type> <count: x> <i> <j0> <Δj>
*
* <Δelement_count: y>
* <element_count: z>
*
* <Δelement_1_type_base> … <Δelement_y_type_base>
* <Δelement_1_type_incr> … <Δelement_y_type_incr>
* <elem_1_type> … <elem_z_type>
*
* <Δelement_1_first> … <Δelement_y_first>
*
* <Δelem_1_0> … <Δelem_1_x>
* …
* <Δelem_y_0> … <Δelem_y_x>
* <elem_1_0> … <elem_1_x>
* …
* <elem_z_0> … <elem_z_x>
*
*
* The conceptual packet format for interleaved encoded data
* looks like this:
*
*
* <msg-type> <count: x> <i> <j0> <Δj>
*
* <Δelement_count: y>
* <element_count: z>
*
* <Δelement_1_type_base> … <Δelement_y_type_base>
* <Δelement_1_type_incr> … <Δelement_y_type_incr>
* <elem_1_type> … <elem_z_type>
*
* <Δelement_1_first> … <Δelement_y_first>
*
* <Δelem_1_0> <Δelem_2_0> … <Δelem_y_0> <elem_1_0> <elem_2_0> … <elem_z_0>
* <Δelem_1_1> <Δelem_2_1> … <Δelem_y_1> <elem_1_1> <elem_2_1> … <elem_z_1>
* …
* <Δelem_1_x> <Δelem_2_x> … <Δelem_y_x> <elem_1_x> <elem_2_x> … <elem_z_x>
*
*
* Usage example:
*
* json = [
* {
* sequence: 7,
* sailline: 5354,
* line: 5356,
* point: 1068,
* tstamp: 1695448704372,
* objrefraw: 3,
* objreffinal: 4
* },
* {
* sequence: 7,
* sailline: 5354,
* line: 5352,
* point: 1070,
* tstamp: 1695448693612,
* objrefraw: 2,
* objreffinal: 3
* },
* {
* sequence: 7,
* sailline: 5354,
* line: 5356,
* point: 1072,
* tstamp: 1695448684624,
* objrefraw: 3,
* objreffinal: 4
* }
* ];
*
* deltas = [
* { key: el => el.tstamp, baseType: BigUint64Array, incrType: Int16Array }
* ];
*
* elems = [
* { key: el => el.objrefraw, type: Uint8Array },
* { key: el => el.objreffinal, type: Uint8Array }
* ];
*
* i = el => el.sequence;
*
* j = el => el.point;
*
* bundle = encode.sequential(json, i, j, deltas, elems);
*
* // bundle:
*
* Uint8Array(40) [
* 28, 36, 0, 0, 17, 0, 3, 0, 7, 0,
* 44, 4, 2, 0, 1, 2, 42, 1, 1, 116,
* 37, 158, 192, 138, 1, 0, 0, 0, 0, 0,
* 248, 213, 228, 220, 3, 2, 3, 4, 3, 4
* ]
*
* decode.sequential(bundle);
*
* {
* i: 7,
* j: [ 1068, 1070, 1072 ],
* 'Δelems': [ [ 1695448704372, 1695448693612, 1695448684624 ] ],
* elems: [ [ 3, 2, 3 ], [ 4, 3, 4 ] ]
* }
*
*/
module.exports = {
encode: {...require('./encode')},
decode: {...require('./decode')},
...require('./classes')
};
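
The decoder returns columns, so a consumer typically zips them back into rows. A sketch (property names follow the example above; the package name comes from the package.json below, and bundle is assumed to hold the encoded bytes):

const { decode } = require('@dougal/binary');
const out = decode.sequential(bundle); // bundle as in the example above
const rows = out.j.map((point, idx) => ({
sequence: out.i,
point,
tstamp: out.Δelems[0][idx],
objrefraw: out.elems[0][idx],
objreffinal: out.elems[1][idx]
}));
// rows[0] => { sequence: 7, point: 1068, tstamp: 1695448704372, objrefraw: 3, objreffinal: 4 }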


@@ -0,0 +1,12 @@
{
"name": "@dougal/binary",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": ""
}


@@ -0,0 +1,25 @@
class ConcurrencyLimiter {
constructor(maxConcurrent) {
this.maxConcurrent = maxConcurrent;
this.active = 0;
this.queue = [];
}
async enqueue(task) {
// Re-check after waking: another enqueue() may have claimed the freed
// slot before this waiter resumed, so loop instead of assuming it.
while (this.active >= this.maxConcurrent) {
await new Promise(resolve => this.queue.push(resolve));
}
this.active++;
try {
return await task();
} finally {
this.active--;
if (this.queue.length > 0) {
this.queue.shift()();
}
}
}
}
module.exports = ConcurrencyLimiter;
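
A usage sketch (the URLs are placeholders): enqueue() resolves with each task's own return value, so Promise.all collects the results in order:

const ConcurrencyLimiter = require('@dougal/concurrency'); // name from package.json below
const limiter = new ConcurrencyLimiter(3); // at most 3 tasks in flight
const urls = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c']; // placeholders
Promise.all(
urls.map(url => limiter.enqueue(() => fetch(url).then(r => r.status))) // global fetch, Node 18+
).then(statuses => console.log(statuses));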


@@ -0,0 +1,12 @@
{
"name": "@dougal/concurrency",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": ""
}


@@ -0,0 +1,75 @@
class Organisation {
constructor (data) {
this.read = !!data?.read;
this.write = !!data?.write;
this.edit = !!data?.edit;
this.other = {};
return new Proxy(this, {
get (target, prop) {
if (prop in target) {
return target[prop]
} else {
return target.other[prop];
}
},
set (target, prop, value) {
const newValue = Boolean(value);
if (["read", "write", "edit"].includes(prop)) {
target[prop] = newValue;
} else {
target.other[prop] = newValue;
}
return true;
}
});
}
toJSON () {
return {
read: this.read,
write: this.write,
edit: this.edit,
...this.other
}
}
toString (replacer, space) {
return JSON.stringify(this.toJSON(), replacer, space);
}
/** Limit the operations to only those allowed by `other`
*/
filter (other) {
const filteredOrganisation = new Organisation();
filteredOrganisation.read = this.read && other.read;
filteredOrganisation.write = this.write && other.write;
filteredOrganisation.edit = this.edit && other.edit;
return filteredOrganisation;
}
intersect (other) {
return this.filter(other);
}
}
if (typeof module !== 'undefined' && module.exports) {
module.exports = Organisation; // CJS export
}
// ESM export
if (typeof exports !== 'undefined' && !exports.default) {
exports.default = Organisation; // ESM export
}
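
A short sketch of the Proxy behaviour (require path assumed): unknown operation names are routed into other and coerced to booleans, while filter only intersects the three core operations:

const Organisation = require('./Organisation'); // path assumed
const mine = new Organisation({ read: true, write: true, edit: true });
mine.approve = 1; // set trap: stored in `other` as Boolean(1) === true
console.log(mine.approve); // true, served by the get trap
const theirs = new Organisation({ read: true, write: false, edit: true });
console.log(mine.filter(theirs).toString()); // {"read":true,"write":false,"edit":true}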


@@ -0,0 +1,225 @@
const Organisation = require('./Organisation');
class Organisations {
#values = {}
#overlord
static entries (orgs) {
return orgs.names().map(name => [name, orgs.get(name)]);
}
constructor (data, overlord) {
if (data instanceof Organisations) {
for (const [name, value] of Organisations.entries(data)) {
this.set(name, new Organisation(value));
}
} else if (data instanceof Object) {
for (const [name, value] of Object.entries(data)) {
this.set(name, new Organisation(value));
}
} else if (typeof data === "string") {
this.set(data, new Organisation());
} else if (typeof data !== "undefined") {
throw new Error("Invalid constructor argument");
}
if (overlord) {
this.#overlord = overlord;
}
}
get values () {
return this.#values;
}
get length () {
return this.names().length;
}
get overlord () {
return this.#overlord;
}
set overlord (v) {
this.#overlord = new Organisations(v);
}
/** Get the operations for `name`
*/
get (name) {
const key = Object.keys(this.values).find( k => k.toLowerCase() == name.toLowerCase() ) ?? name;
return this.values[key];
}
/** Set the operations for `name` to `value`
*
* If we have an overlord, ensure we cannot:
*
* 1. Add new organisations which the overlord
* is not a member of
* 2. Access operations that the overlord is not
* allowed to access
*/
set (name, value) {
name = String(name).trim();
const key = Object.keys(this.values).find( k => k.toLowerCase() == name.toLowerCase() ) ?? name;
const org = new Organisation(value);
if (this.overlord) {
const parent = this.overlord.get(key) ?? this.overlord.get("*");
if (parent) {
this.values[key] = parent.filter(org);
}
} else {
this.values[key] = new Organisation(value);
}
return this;
}
/** Enable the operation `op` in all organisations
*/
enableOperation (op) {
if (this.overlord) {
Object.keys(this.#values)
.filter( key => (this.overlord.get(key) ?? this.overlord.get("*"))?.[op] )
.forEach( key => this.#values[key][op] = true );
} else {
Object.values(this.#values).forEach( org => org[op] = true );
}
return this;
}
/** Disable the operation `op` in all organisations
*/
disableOperation (op) {
Object.values(this.#values).forEach( org => org[op] = false );
return this;
}
/** Create a new organisation object limited by the caller's rights
*
* The spawned Organisations instance will have the same organisations
* and rights as the caller minus the applied `mask`. With the default
* mask, the spawned object will inherit all rights except for `edit`
* rights.
*
* The "*" organisation must be explicitly assigned. It is not inherited.
*/
spawn (mask = {read: true, write: true, edit: false}) {
const parent = new Organisations();
const wildcard = this.get("*")?.edit; // If true, we can spawn everywhere
this.entries().forEach( ([k, v]) => {
if (v.edit || wildcard) { // We have the right to spawn in this organisation
const o = new Organisation({
read: v.read && mask.read,
write: v.write && mask.write,
edit: v.edit && mask.edit
});
parent.set(k, o);
}
});
return new Organisations({}, parent);
}
remove (name) {
const key = Object.keys(this.values).find( k => k.toLowerCase() == name.toLowerCase() ) ?? name;
delete this.values[key];
}
/** Return the list of organisation names
*/
names () {
return Object.keys(this.values);
}
/** Like this.get(name), but with an exact, case-sensitive key lookup
*/
value (name) {
return this.values[name];
}
/** Analogous to Object.entries
*/
entries () {
return this.names().map( name => [ name, this.value(name) ] );
}
/** Return true if the named organisation is present
*/
has (name) {
return Boolean(this.value(name));
}
/** Return only those of our organisations
* and operations present in `other`
*/
filter (other) {
const filteredOrganisations = new Organisations();
const wildcard = other.value("*");
for (const [name, org] of this.entries()) {
const ownOrg = other.value(name) ?? wildcard;
if (ownOrg) {
filteredOrganisations.set(name, org.filter(ownOrg))
}
}
return filteredOrganisations;
}
/** Return only those organisations
* that have access to the required
* operation
*/
accessToOperation (op) {
const filteredOrganisations = new Organisations();
for (const [name, org] of this.entries()) {
if (org[op]) {
filteredOrganisations.set(name, org);
}
}
return filteredOrganisations;
}
toJSON () {
const obj = {};
for (const key in this.values) {
obj[key] = this.values[key].toJSON();
}
return obj;
}
toString (replacer, space) {
return JSON.stringify(this.toJSON(), replacer, space);
}
*[Symbol.iterator] () {
for (const [name, operations] of this.entries()) {
yield {name, operations};
}
}
}
if (typeof module !== 'undefined' && module.exports) {
module.exports = Organisations; // CJS export
}
// ESM export
if (typeof exports !== 'undefined' && !exports.default) {
exports.default = Organisations; // ESM export
}
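
A sketch of the overlord mechanism (organisation names are hypothetical): set() silently drops organisations the overlord does not grant, and filters operations through the overlord's rights:

const Organisations = require('./Organisations'); // path assumed
const admin = new Organisations({ acme: { read: true, write: true, edit: true } });
const child = new Organisations({}, admin);
child.set('acme', { read: true, write: true, edit: true }); // kept, filtered by admin's acme rights
child.set('globex', { read: true }); // dropped: admin grants nothing for globex and has no "*"
console.log(child.toString()); // {"acme":{"read":true,"write":true,"edit":true}}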


@@ -0,0 +1,5 @@
module.exports = {
Organisation: require('./Organisation'),
Organisations: require('./Organisations')
}


@@ -0,0 +1,12 @@
{
"name": "@dougal/organisations",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": ""
}


@@ -0,0 +1,364 @@
const EventEmitter = require('events');
const { Organisations } = require('@dougal/organisations');
function randomUUID () {
return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
const r = Math.random() * 16 | 0;
const v = c === 'x' ? r : (r & 0x3 | 0x8);
return v.toString(16);
});
}
class User extends EventEmitter {
// Valid field names
static fields = [ "ip", "host", "name", "email", "description", "colour", "active", "organisations", "meta" ]
static validUUID (str) {
const uuidv4Rx = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;
return uuidv4Rx.test(str);
}
static validIPv4 (str) {
const ipv4Rx = /^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\/([0-9]|[1-2][0-9]|3[0-2]))?$/;
return ipv4Rx.test(str);
}
static validIPv6 (str) {
const ipv6Rx = /^(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?::[0-9a-fA-F]{1,4}){1,6}|:((?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4][0-9]|[01]?[0-9][0-9]?))|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4][0-9]|[01]?[0-9][0-9]?))))$/;
return ipv6Rx.test(str);
}
static validHostname (str) {
const hostnameRx = /^(?=.{1,253}$)(?:(?!-)[A-Za-z0-9-]{1,63}(?<!-)\.)+[A-Za-z]{2,}$/;
return hostnameRx.test(str);
}
#setString (k, v) {
if (typeof v === "undefined") {
this.values[k] = v;
} else {
this.values[k] = String(v).trim();
}
this.emit("changed", k, v);
this.#updateTimestamp();
}
#updateTimestamp (v) {
if (typeof v === "undefined") {
this.#timestamp = (new Date()).valueOf();
} else {
this.#timestamp = (new Date(v)).valueOf();
}
this.emit("last_modified", this.#timestamp);
}
// Create a new instance of `other`, where `other` is
// an instance of User or of a derived class
#clone (other = this) {
const clone = new this.constructor();
Object.assign(clone.values, other.values);
clone.organisations = new Organisations(other.organisations);
return clone;
}
values = {}
#timestamp
constructor (data) {
super();
User.fields.forEach( f => this[f] = data?.[f] );
this.values.id = data?.id ?? randomUUID();
this.values.active = !!this.active;
this.values.hash = data?.hash;
this.values.password = data?.password;
this.values.organisations = new Organisations(data?.organisations);
this.#updateTimestamp(data?.last_modified);
}
/*
* Getters
*/
get id () { return this.values.id }
get ip () { return this.values.ip }
get host () { return this.values.host }
get name () { return this.values.name }
get email () { return this.values.email }
get description () { return this.values.description }
get colour () { return this.values.colour }
get active () { return this.values.active }
get organisations () { return this.values.organisations }
get password () { return this.values.password }
get timestamp () { return new Date(this.#timestamp) }
/*
* Setters
*/
set id (v) {
if (typeof v === "undefined") {
this.values.id = randomUUID();
} else if (User.validUUID(v)) {
this.values.id = v;
} else {
throw new Error("Invalid ID format (must be UUIDv4)");
}
this.emit("changed", "id", this.values.id);
this.#updateTimestamp();
}
set ip (v) {
if (User.validIPv4(v) || User.validIPv6(v) || typeof v === "undefined") {
this.values.ip = v;
} else {
throw new Error("Invalid IP address or subnet");
}
this.emit("changed", "ip", this.values.ip);
this.#updateTimestamp();
}
set host (v) {
if (User.validHostname(v) || typeof v === "undefined") {
this.values.host = v;
} else {
throw new Error("Invalid hostname");
}
this.emit("changed", "host", this.values.host);
this.#updateTimestamp();
}
set name (v) {
this.#setString("name", v);
}
set email (v) {
// TODO should validate, but hey!
this.#setString("email", v);
}
set description (v) {
this.#setString("description", v);
}
set colour (v) {
this.#setString("colour", v);
}
set active (v) {
this.values.active = !!v;
this.emit("changed", "active", this.values.active);
this.#updateTimestamp();
}
set organisations (v) {
this.values.organisations = new Organisations(v);
this.emit("changed", "organisations", this.values.organisations);
this.#updateTimestamp();
}
set password (v) {
this.values.password = v;
this.emit("changed", "password", this.values.password);
this.#updateTimestamp();
}
/*
* Validation methods
*/
get errors () {
let err = [];
if (!this.id) err.push("ERR_NO_ID");
if (!this.name) err.push("ERR_NO_NAME");
if (!this.organisations.length) err.push("ERR_NO_ORG");
return err;
}
get isValid () {
return this.errors.length == 0;
}
/*
* Filtering methods
*/
filter (other) {
const filteredUser = this.#clone();
filteredUser.organisations = this.organisations.filter(other.organisations);
return filteredUser;
}
/** Return users that are visible to me.
*
* These are users with whom we share at least one
* organisation to which we have read access.
*
* If we are wildcarded ("*"), we see everyone.
*
* If a peer is wildcarded, they can be seen by everyone.
*/
peers (list) {
if (this.organisations.value("*")) {
return list;
} else {
return list.filter( user => this.canRead(user) );
}
}
/** Return users that I can edit
*
* These users must belong to an organisation
* over which I have edit rights.
*
* If we are edit wildcarded, we can edit everyone.
*/
editablePeers (list) {
const editableOrgs = this.organisations.accessToOperation("edit");
if (editableOrgs.value("*")) {
return list;
} else {
return list.filter( user => this.canEdit(user) );
}
}
/*
* General methods
*/
/** Return `true` if we are `other`
*/
is (other) {
return this.id == other.id;
}
canDo (operation, other) {
if (this.organisations.get('*')?.[operation])
return true;
if (other instanceof User) {
return other.organisations.names().some(name => this.organisations.get(name)?.[operation]);
} else if (other instanceof Organisations) {
return other.accessToOperation(operation).names().some(name => this.organisations.get(name)?.[operation]);
} else if (other?.organisations) {
return this.canDo(operation, new Organisations(other.organisations));
} else if (other instanceof Object) {
return this.canDo(operation, new Organisations(other));
}
return false;
}
canRead (other) {
return this.canDo("read", other);
}
canWrite (other) {
return this.canDo("write", other);
}
canEdit (other) {
return this.canDo("edit", other);
}
/** Perform an edit on another user
*
* Syntax: user.edit(other).to(another);
*
* Applies to `other` the changes described in `another`
* that are permitted to `user`. The argument `another`
* must be a plain object (not a `User` instance) with
* only the properties that are to be changed.
*
* NOTE: Organisations are not merged, they are overwritten
* and then filtered to ensure that the edited user does not
* gain more privileges than those granted to the editing
* user.
*
* Example:
*
* // This causes user test77 to set user x23 to
* // inactive
* test77.edit(x23).to({active: false})
*/
edit (other) {
if (this.canEdit(other)) {
return {
to: (another) => {
const newUser = Object.assign(this.#clone(other), another);
return newUser.filter(this);
}
}
}
// Do not fail or throw but return undefined
}
/** Create a new user similar to us except it doesn't have `edit` rights
* by default
*/
spawn (init = {}, mask = {read: true, write: true, edit: false}) {
const user = this.#clone(init);
user.organisations = this.organisations.accessToOperation("edit").disableOperation("edit");
user.organisations.overlord = this.organisations;
return user;
}
/*
* Conversion and presentation methods
*/
toJSON () {
return {
id: this.id,
ip: this.ip,
host: this.host,
name: this.name,
email: this.email,
description: this.description,
colour: this.colour,
active: this.active,
organisations: this.organisations.toJSON(),
password: this.password
}
}
toString (replacer, space) {
return JSON.stringify(this.toJSON(), replacer, space);
}
}
if (typeof module !== 'undefined' && module.exports) {
module.exports = User; // CJS export
}
// ESM export
if (typeof exports !== 'undefined' && !exports.default) {
exports.default = User; // ESM export
}
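
A sketch of the edit flow (names and organisations hypothetical): edit() returns undefined unless the caller has edit rights on one of the target's organisations, and the result is a filtered clone rather than a mutation of the target:

const User = require('./User'); // path assumed; depends on @dougal/organisations
const alice = new User({ name: 'Alice', organisations: { acme: { read: true, write: true, edit: true } } });
const bob = new User({ name: 'Bob', active: true, organisations: { acme: { read: true, write: true, edit: false } } });
console.log(alice.canEdit(bob)); // true: alice holds edit rights on acme
const edited = alice.edit(bob).to({ active: false });
console.log(edited.active, bob.active); // false true (bob himself is untouched)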


@@ -0,0 +1,4 @@
module.exports = {
User: require('./User')
}

Some files were not shown because too many files have changed in this diff.