From 7a069d1f42e172290f952f488d17effe188bdc56 Mon Sep 17 00:00:00 2001
From: Cutieguwu
Date: Sat, 17 Jan 2026 23:40:19 -0500
Subject: [PATCH] Update README.

---
 README.adoc | 41 -----------------------------
 README.md   | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+), 41 deletions(-)
 delete mode 100644 README.adoc
 create mode 100644 README.md

diff --git a/README.adoc b/README.adoc
deleted file mode 100644
index 9296655..0000000
--- a/README.adoc
+++ /dev/null
@@ -1,41 +0,0 @@
-= kramer
-:toc:
-
-// Hello people reading the README source :)
-
-== Prelude
-
-VERY EARLY ALPHA -- NOT YET FUNCTIONAL
-
-I needed a program to efficiently repair the data on optical discs.
-
-== Goals
-
-* [*] CLI Args
-** [*] Input device
-** [*] Output file (ISO 9660)
-** [*] Repair map file
-** [*] sequence_length
-** [*] brute_passes
-** [*] Sector size override?
-
-* Repair Algorithm
-** Stage 1: Trial
-*** [ ] 1 - From first sector, parse forward to error.
-*** [ ] 2 - From last sector, parse backwards to error.
-*** [ ] 3 - From center of data for trial, parse forward to error or end of remaining trial domain.
-*** [ ] 4 - Stripe-skip remaining data, attempting to read largest trial domains first.
-**** [ ] If data keeps reading good, no skip will occur until an error is reached.
-** Stage 2: Isolation
-*** [ ] From largest to smallest untrustworthy sequence, attempt to read each sequence at half sequence_length.
-*** [ ] Same, but at quarter sequence_length.
-*** [ ] Same, but at eighth sequence_length.
-*** [ ] By sector, parse untrustworthy sequences from start to error, and end to error. Mark mid section for brute force.
-** Stage 3: Brute Force
-*** [ ] Desperately attempt to recover data from marked sections.
-*** [ ] Attempt for brute_passes, retrying all failed sectors.
-
-* [ ] Repair Map
-** [ ] I'll figure out some kind of language for this...
-
-== License
\ No newline at end of file
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..796c3da
--- /dev/null
+++ b/README.md
@@ -0,0 +1,74 @@
+# kramer
+
+`kramer` is a data recovery utility for optical media.
+
+There are plans to change the project name, but during initial development it
+will continue to be referred to as kramer.
+
+This is still in very early development, so expect old maps to no longer work.
+
+## Plans
+
+### Core
+
+- [x] Mapping
+  - [x] Record the state of disc regions
+  - [x] Recover from a saved map
+- [ ] Recovery
+  - [x] Initial
+    Technically there is an outstanding issue with sleepy firmware here,
+    but besides that it works.
+  - [ ] Patchworking
+  - [ ] Isolate
+  - [ ] Scraping
+- [ ] CLI
+  - [x] Arguments
+  - [ ] Recovery progress
+  - [ ] Recovery stats
+- [ ] Documentation, eugh.
+
+### Extra
+
+- [ ] i18n
+  - [ ] English
+  - [ ] French
+- [ ] TUI (akin to `ddrescueview`)
+  - [ ] Visual status map
+  - [ ] Recovery properties
+  - [ ] Recovery progress
+  - [ ] Recovery stats
+
+
+## Recovery Strategy
+
+### Initial Pass (Stage::Untested)
+
+Tries to read clusters of `max_buffer_size`, marking clusters with errors as
+`ForIsolation` (note that the name has not yet been updated to
+`Patchwork{ depth }`).
+
+### Patchworking
+
+This works by halving the length of the read buffer until one of three
+conditions is met:
+
+1. `max_buffer_size` has been divided by `max_buffer_subdivision`
+   (like a maximum recursion depth).
+   Effectively, it keeps running `max_buffer_size / isolation_depth;
+   isolation_depth++;` on each pass until `isolation_depth ==
+   max_buffer_subdivision`.
+2. `buffer_size <= min_buffer_size`
+3. `buffer_size <= sector_size`
+
+### Isolate
+
+This is where we reach brute-forcing territory. `ddrescue` refers to this as
+trimming; `kramer` implements the same technique.
+However, thanks to the patchworking pass, this sector-at-a-time reading can
+be minimized, hopefully reducing wear and overall recovery time on drives
+with a very short spin-down delay.
+
+### Scraping
+
+This is the pure brute force, sector-at-a-time read. This has identical
+behaviour to `ddrescue`'s scraping phase.
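
The patchworking stage described in the new README could be sketched roughly as below. This is a minimal illustration only, not kramer's actual code: the function and variable names (`patchwork_schedule`, `depth`) are hypothetical, and it reads condition 1 as a maximum-depth cap on repeated halving, checking the README's three stop conditions in order.

```rust
/// Illustrative sketch: compute the sequence of read-buffer sizes the
/// patchworking stage would try, halving each pass until one of the
/// three stop conditions from the README holds.
fn patchwork_schedule(
    max_buffer_size: u64,
    min_buffer_size: u64,
    sector_size: u64,
    max_buffer_subdivision: u32,
) -> Vec<u64> {
    let mut sizes = Vec::new();
    let mut buffer_size = max_buffer_size;
    let mut depth = 0u32;
    loop {
        buffer_size /= 2; // halve the read buffer each pass
        depth += 1;
        sizes.push(buffer_size);
        // Stop conditions, in the order the README lists them:
        if depth == max_buffer_subdivision  // 1. maximum subdivision depth
            || buffer_size <= min_buffer_size // 2. configured floor
            || buffer_size <= sector_size     // 3. can't usefully read below a sector
        {
            break;
        }
    }
    sizes
}

fn main() {
    // Hypothetical values: 1 MiB starting buffer, 4 KiB floor,
    // 2 KiB sectors (common for optical media), depth cap of 8.
    let passes = patchwork_schedule(1 << 20, 4096, 2048, 8);
    println!("{:?}", passes);
}
```

The schedule terminates on whichever condition trips first, so a small `max_buffer_subdivision` can stop subdivision well before the buffer shrinks to a single sector.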