Fix/sqlite delete journal mode lustre #367
Open
rhaegar325 wants to merge 4 commits into main
Codecov Report ❌ Patch coverage is
@@ Coverage Diff @@
##            main    #367    +/- ##
=======================================
+ Coverage   70.7%   71.0%   +0.3%
=======================================
  Files         28      28
  Lines       4686    4704     +18
  Branches     849     853      +4
=======================================
+ Hits        3312    3340     +28
+ Misses      1169    1158     -11
- Partials     205     206      +1
Solves #274
Fix: SQLite Concurrent Write Crashes on Lustre (Gadi)
Problem
Batch CMORisation on Gadi failed with two classes of SQLite errors when multiple PBS jobs accessed the shared tracking database concurrently.

1. SIGBUS (bus error, signal 7) in `walIndexReadHdr()`. WAL mode creates a `.db-shm` file accessed via `mmap()` to share the WAL index across processes. On Lustre (`/scratch`, `/g/data`), `mmap()` cache coherency is not guaranteed across compute nodes, so the mapped memory can reference a nonexistent physical address.

2. `disk I/O error` (`SQLITE_IOERR`). With 71 PBS jobs starting simultaneously, Lustre's Metadata Server (MDS) intermittently returns `EIO` under high concurrent load. `fsync()` calls on journal files, as well as direct `read()`/`write()` syscalls on the database file, can fail transiently.

Changes
`src/access_moppy/tracking.py`

SQLite configuration (`_init_db`):

- `journal_mode`: `WAL` → `DELETE`; no `.db-shm` mmap, safe on Lustre
- `synchronous`: `NORMAL` → `OFF`; removes `fsync()` calls, the primary EIO source
- `busy_timeout`: `30000` ms (moved first)
- connect timeout: `30` s
- `wal_checkpoint(TRUNCATE)`: removed (no WAL to checkpoint in `DELETE` mode)

Rationale for `DELETE` + `synchronous=OFF`: DELETE mode writes a rollback journal via the OS page cache (no `fsync()`), which survives process crashes for automatic recovery. `synchronous=OFF` eliminates all `fsync()` calls. `pwrite()` to the journal goes through the kernel page cache and does not trigger `EIO`; only `fsync()` does.

Retry logic (`_execute_with_retry`)
Extended the retry condition from `"database is locked"` only to also cover `"disk I/O error"`. Lustre MDS `EIO` errors are transient; exponential backoff retries (1 s, 2 s, 4 s, 8 s, 16 s) succeed once the metadata server recovers.
All read and write methods now route through `_execute_with_retry`

Previously, `get_status()` and `is_done()` called `cursor.execute()` directly with no retry protection. These are the first DB calls made by each PBS job at startup, the highest-risk window for concurrent `EIO` failures. Routing them through `_execute_with_retry` closes this gap. `is_done()` is simplified to delegate to `get_status()`, removing duplicate query logic.
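The routing and delegation described above can be sketched as follows. The names `get_status`, `is_done`, and `_execute_with_retry` come from the PR; the `TrackingDB` class, the table schema, and the `"done"` status value are assumptions for illustration:

```python
import sqlite3

class TrackingDB:
    """Hypothetical sketch of the tracking-DB access pattern."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS tracking"
            " (variable TEXT PRIMARY KEY, status TEXT)"
        )

    def _execute_with_retry(self, sql, params=()):
        # Backoff loop elided; every query goes through this single choke point.
        return self.conn.execute(sql, params)

    def get_status(self, variable):
        row = self._execute_with_retry(
            "SELECT status FROM tracking WHERE variable = ?", (variable,)
        ).fetchone()
        return row[0] if row else None

    def is_done(self, variable):
        # Delegates to get_status(): one query path, one retry wrapper.
        return self.get_status(variable) == "done"
```

Because `is_done()` reuses `get_status()`, the retry protection only has to be correct in one place.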
`tests/unit/test_tracking.py`

- `test_no_shm_or_wal_files_on_disk`: asserts that no `.db-shm` or `.db-wal` files are created after writes (WAL mode absent).
- `test_journal_mode_is_delete`: asserts `PRAGMA journal_mode` returns `delete`.
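A sketch of what these two tests might look like, assuming a plain `sqlite3` connection opened with the new PRAGMAs (the real tests presumably exercise the tracking module's own `_init_db`):

```python
import os
import sqlite3

def _open(path):
    # Stand-in for the tracking module's DB setup (assumption).
    conn = sqlite3.connect(path, timeout=30)
    conn.execute("PRAGMA journal_mode = DELETE")
    conn.execute("PRAGMA synchronous = OFF")
    return conn

def test_journal_mode_is_delete(tmp_path):
    conn = _open(str(tmp_path / "track.db"))
    assert conn.execute("PRAGMA journal_mode").fetchone()[0] == "delete"

def test_no_shm_or_wal_files_on_disk(tmp_path):
    db = str(tmp_path / "track.db")
    conn = _open(db)
    conn.execute("CREATE TABLE t(x)")
    conn.execute("INSERT INTO t VALUES (1)")
    conn.commit()
    assert not os.path.exists(db + "-shm")
    assert not os.path.exists(db + "-wal")
```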
Test Results

Five rounds of 71 concurrent PBS jobs on Gadi:

[results table: per-round `disk I/O error` counts]

Trade-offs
- `synchronous=OFF`: the OS page cache is not forced to disk after each commit. If a PBS job is killed mid-write (e.g. walltime exceeded during the <10 ms write window), the tracking database may be inconsistent. The tracking DB is non-critical status metadata; it can be deleted and repopulated without affecting CMORised output files.
- `journal_mode=DELETE`: writers take exclusive write locks that block readers, unlike WAL's concurrent-reader model. In MOPPy's workload each job writes twice in under 10 ms across hours of processing, so this has no measurable impact on throughput.