# Compare commits

`1e80713263...main` — 13 commits. (The author and date columns of the commit table did not survive; only the SHA1 column remains.)

| SHA1 |
|---|
| 1da28b7c48 |
| 5ea9e2afd3 |
| 5bca12406b |
| d71f6fd8d8 |
| e31ff33277 |
| 6ecc43dedf |
| 7a069d1f42 |
| be08baa6fb |
| 549eafe7e0 |
| e161069893 |
| 754ab48b92 |
| d4094d61f0 |
| 2ce889314a |

## CODE_OF_CONDUCT.md (new file, +128 lines)

@@ -0,0 +1,128 @@
```markdown
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall
  community

Examples of unacceptable behavior include:

- The use of sexualized language or imagery, and sexual attention or advances of
  any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address,
  without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
<olivia.a.brooks77@gmail.com>.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of
actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the
community.

## Attribution

This Code of Conduct is adapted from the
[Contributor Covenant](https://www.contributor-covenant.org/), version 2.1,
available at
<https://www.contributor-covenant.org/version/2/1/code_of_conduct/>.

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/inclusion).

For answers to common questions about this code of conduct, see the FAQ at
<https://www.contributor-covenant.org/faq/>. Translations are available at
<https://www.contributor-covenant.org/translations/>.
```
## Cargo.toml

```diff
@@ -23,8 +23,11 @@ features = ["derive"]
 version = "1.0"
 features = ["derive"]
 
-# v0.2.25 is almost 9 years old as of writing this comment.
-[target.'cfg(all(unix, not(target_os = "macos")))'.dependencies]
+[target.'cfg(all(unix, not(target_os = "macos")))'.dependencies]
+# Yes. For one constant, this library is required.
+# Technically, this did a bit more in early testing when I messed about
+# with unsafe ffi disasters trying to solve problems.
+#
+# And yes, I spent time tracking down the first release with that constant.
+# v0.2.25 is almost 9 years old as of writing this comment
 libc = "~0.2.25"
```
## LICENSE (2 changed lines)

```diff
@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) 2025 Olivia Bridie Alexandria Millicent Ivette Brooks
+Copyright (c) 2025 Olivia Brooks
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
```
## README.adoc (deleted, −41 lines)

@@ -1,41 +0,0 @@

```asciidoc
= kramer
:toc:

// Hello people reading the README source :)

== Prelude

VERY EARLY ALPHA -- NOT YET FUNCTIONAL

I needed a program to efficiently repair the data on optical discs.

== Goals

* [*] CLI Args
** [*] Input device
** [*] Output file (ISO 9660)
** [*] Repair map file
** [*] sequence_length
** [*] brute_passes
** [*] Sector size override?

* Repair Algorithm
** Stage 1: Trial
*** [ ] 1 - From first sector, parse forward to error.
*** [ ] 2 - From last sector, parse backwards to error.
*** [ ] 3 - From center of data for trial, parse forward to error or end of remaining trial domain.
*** [ ] 4 - Stripe-skip remaining data, attempting to read largest trial domains first.
**** [ ] If data keeps reading good, no skip will occur until an error is reached.
** Stage 2: Isolation
*** [ ] From largest to smallest untrustworthy sequence, attempt to read each sequence at half sequence_length.
*** [ ] Same, but at quarter sequence_length.
*** [ ] Same, but at eighth sequence_length.
*** [ ] By sector, parse untrustworthy sequences from start to error, and end to error. Mark mid section for brute force.
** Stage 3: Brute Force
*** [ ] Desperately attempt to recover data from marked sections.
*** [ ] Attempt for brute_passes, retrying all failed sectors.

* [ ] Repair Map
** [ ] I'll figure out some kind of language for this...

== License
```
## README.md (new file, +70 lines)

@@ -0,0 +1,70 @@

```markdown
# kramer

`kramer` is a data recovery utility for optical media.

There are plans to change the project name, but during initial development it
will continue to be referred to as kramer.

This is still in very early development, so expect old maps to no longer work.

## Plans

### Core

- [x] Mapping
  - [x] Record the state of disc regions.
  - [x] Recover from saved map.
  - [x] Backup old map before truncating for new.
- [ ] Recovery
  - [x] Initial / Patchworking
        Technically there is an outstanding issue with sleepy firmware here,
        but beside that this technically works.
  - [ ] Isolate
  - [ ] Scraping
- [ ] CLI
  - [x] Arguments
  - [ ] Recovery progress
  - [ ] Recovery stats
- [ ] Documentation, eugh.

### Extra

- [ ] i18n
  - [ ] English
  - [ ] French
- [ ] TUI (akin to `ddrescueview`)
  - [ ] Visual status map
  - [ ] Recovery properties
  - [ ] Recovery progress
  - [ ] Recovery stats

## Recovery Strategy

### Initial Pass / Patchworking

Tries to read clusters of `max_buffer_size`, marking clusters with errors with
an increasing `level`.

This works by halving the length of the read buffer until one of the following
conditions is met:

1. `max_buffer_size` has been divided by `max_buffer_subdivision`
   (like a maximum recursion depth).
   Effectively, it will keep running `max_buffer_size / isolation_depth;
   isolation_depth++;` on each pass until `isolation_depth ==
   max_buffer_subdivision`
2. `buffer_size <= min_buffer_size`
3. `buffer_size <= sector_size`

### Isolate

This is where we reach brute forcing territory. `ddrescue` refers to this as
trimming. `kramer` implements the same technique. However, thanks to the
patchworking pass, this sector-at-a-time reading can be minimized, hopefully
reducing wear and overall recovery time on drives with a very short spin-down
delay.

### Scraping (Stage::BruteForceAndDesperation)

This is the pure brute force, sector-at-a-time read. This has identical
behaviour to `ddrescue`'s scraping phase.
```
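The buffer-shrinking schedule described under "Initial Pass / Patchworking" can be sketched as follows. The parameter names (`max_buffer_size`, `max_buffer_subdivision`, `min_buffer_size`, `sector_size`) come from the README; the concrete halving rule and the example values are illustrative assumptions, not the crate's actual implementation:

```rust
// Sketch of the patchworking read-size schedule: start at max_buffer_size
// and halve until one of the three stop conditions from the README fires.
fn buffer_schedule(
    max_buffer_size: usize,
    max_buffer_subdivision: u32,
    min_buffer_size: usize,
    sector_size: usize,
) -> Vec<usize> {
    let mut sizes = Vec::new();
    let mut size = max_buffer_size;
    let mut depth = 0;
    loop {
        sizes.push(size);
        depth += 1;
        // Condition 1: maximum subdivision (a recursion-depth-style limit).
        if depth == max_buffer_subdivision {
            break;
        }
        size /= 2;
        // Conditions 2 and 3: lower bounds on the read size.
        if size <= min_buffer_size || size <= sector_size {
            sizes.push(size);
            break;
        }
    }
    sizes
}

fn main() {
    // 1 MiB starting buffer, 2048-byte sectors (typical for optical media).
    let sched = buffer_schedule(1 << 20, 8, 4096, 2048);
    println!("{:?}", sched); // [1048576, 524288, ..., 8192]
}
```

With these example values the depth limit is what terminates the schedule, after eight progressively halved passes.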
```diff
@@ -31,14 +31,16 @@ pub struct Args {
     #[arg(short, long, default_value_t = crate::FB_SECTOR_SIZE)]
     pub sector_size: usize,
 
-    // Behaviour is backwards.
+    // !!! ArgAction behaviour is backwards !!!
     // ArgAction::SetFalse by default evaluates to true,
     // ArgAction::SetTrue by default evaluates to false.
     //
     /// Upon encountering a read error, reopen the source file before continuing.
     #[arg(short, long, action = ArgAction::SetTrue)]
     pub reopen_on_error: bool,
 
     /// Use O_DIRECT to bypass kernel buffer when reading.
     //
     // BSD seems to support O_DIRECT, but MacOS for certain does not.
     #[cfg(all(unix, not(target_os = "macos")))]
     #[arg(short, long = "direct", action = ArgAction::SetFalse)]
```
## src/io.rs (65 changed lines)

```diff
@@ -1,9 +1,11 @@
 use std::fs::{File, OpenOptions};
 use std::io::{self, Seek, SeekFrom};
+use std::ops::Index;
+use std::path::Path;
 
 use crate::cli::CONFIG;
 
-use anyhow::Context;
+use anyhow::{Context, anyhow};
 
-/// Get length of data stream.
+/// Physical length of data stream in bytes
@@ -64,10 +66,21 @@ pub fn load_map_read() -> std::io::Result<File> {
 }
 
 pub fn load_map_write() -> anyhow::Result<File> {
+    // Attempt to check if a map exists on the disk.
+    // If so, make a backup of it.
+    //
+    // This should be recoverable by just skipping over this error and logging a warning,
+    // but for now it will be an error condition.
+    if std::fs::exists(crate::path::MAP_PATH.clone())
+        .context("Could not check if map exists in fs to make a backup.")?
+    {
+        backup(crate::path::MAP_PATH.clone())?;
+    }
+
     OpenOptions::new()
         .write(true)
         .create(true)
-        .truncate(true) // Wipe old map. Should really make a backup first.
+        .truncate(true) // Wipe old map, in case we skip over backing up the old one.
         .open(crate::path::MAP_PATH.clone())
         .with_context(|| {
             format!(
@@ -77,9 +90,23 @@ pub fn load_map_write() -> anyhow::Result<File> {
         })
 }
 
+fn backup<P: AsRef<Path>>(path: P) -> std::io::Result<()> {
+    std::fs::rename(
+        &path,
+        format!("{}.bak", path.as_ref().to_path_buf().display()),
+    )
+}
+
 #[derive(Debug)]
 #[repr(C, align(512))]
 pub struct DirectIOBuffer(pub [u8; crate::MAX_BUFFER_SIZE]);
 
+impl DirectIOBuffer {
+    pub fn new() -> Self {
+        Self::default()
+    }
+}
+
 impl Default for DirectIOBuffer {
     fn default() -> Self {
         Self([crate::FB_NULL_VALUE; _])
@@ -91,3 +118,37 @@ impl From<[u8; crate::MAX_BUFFER_SIZE]> for DirectIOBuffer {
         Self(value)
     }
 }
 
+impl TryFrom<&[u8]> for DirectIOBuffer {
+    type Error = anyhow::Error;
+
+    fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
+        if value.len() > crate::MAX_BUFFER_SIZE {
+            return Err(anyhow!("Provided slice is larger than MAX_BUFFER_SIZE."));
+        }
+
+        Ok(Self(value.try_into()?))
+    }
+}
+
+impl AsRef<[u8]> for DirectIOBuffer {
+    fn as_ref(&self) -> &[u8] {
+        &self.0
+    }
+}
+
+impl AsMut<[u8]> for DirectIOBuffer {
+    fn as_mut(&mut self) -> &mut [u8] {
+        &mut self.0
+    }
+}
+
+impl<Idx> Index<Idx> for DirectIOBuffer
+where
+    Idx: std::slice::SliceIndex<[u8], Output = [u8]>,
+{
+    type Output = Idx::Output;
+    fn index(&self, index: Idx) -> &Self::Output {
+        &self.0.as_slice()[index]
+    }
+}
```
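The `#[repr(C, align(512))]` on `DirectIOBuffer` is what makes the buffer usable with `O_DIRECT`, which generally requires user buffers aligned to the device's logical block size. A standalone sketch of the same idea (the 4096-byte length is an arbitrary stand-in for `crate::MAX_BUFFER_SIZE`):

```rust
// O_DIRECT reads typically require the buffer address to be a multiple of
// the logical block size (often 512 bytes). `#[repr(align(N))]` makes every
// allocation of the type satisfy that, so no manual pointer fiddling is needed.
#[repr(C, align(512))]
struct AlignedBuf([u8; 4096]);

fn main() {
    let buf = AlignedBuf([0u8; 4096]);
    let addr = buf.0.as_ptr() as usize;
    // The inner array sits at offset 0, so it inherits the 512-byte alignment.
    assert_eq!(addr % 512, 0);
    println!("buffer aligned at {:#x}", addr);
}
```

This also explains the `#[repr(C)]` part: it pins the field at offset 0 so the alignment guarantee transfers to the byte array itself.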
```diff
@@ -3,6 +3,8 @@ use super::{Domain, Stage};
 use serde::{Deserialize, Serialize};
 
 /// A map for data stored in memory for processing and saving to disk.
+// derived Ord impl *should* use self.domain.start to sort? Not sure.
+// Use `sort_by_key()` to be safe.
 #[derive(Clone, Copy, Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
 pub struct Cluster {
     pub domain: Domain,
```
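The uncertainty in that comment can be settled: `#[derive(PartialOrd, Ord)]` on a struct compares fields top-to-bottom in declaration order, so a `Cluster` whose first field is `domain` (and a `Domain` whose first field is `start`) does sort by `domain.start` first. A minimal demonstration with a stand-in `Domain`:

```rust
// Derived Ord compares struct fields lexicographically in declaration order:
// `start` is compared first, and `end` only breaks ties.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Domain {
    start: usize,
    end: usize,
}

fn main() {
    let mut v = [Domain { start: 5, end: 6 }, Domain { start: 1, end: 9 }];
    v.sort(); // behaves like sort_by_key(|d| (d.start, d.end)) here
    assert_eq!(v[0], Domain { start: 1, end: 9 });
}
```

So `sort_by_key(|c| c.domain.start)` is not strictly required, but it is the defensive choice the comment opts for, since it stays correct even if field order changes later.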
```diff
@@ -1,6 +1,8 @@
 use std::fs::File;
 use std::io::Write;
 
+use crate::mapping::cluster;
+
 use super::{Cluster, Domain, DomainOverlap, Stage};
 
 use anyhow;
@@ -30,7 +32,7 @@ impl Default for MapFile {
             domain: Domain::default(),
             map: vec![Cluster {
                 domain: Domain::default(),
-                stage: Stage::Untested,
+                stage: Stage::Patchwork { depth: 0 },
             }],
         }
     }
```
```diff
@@ -81,18 +83,8 @@ impl MapFile {
         let mut recover_stage = Stage::Damaged;
 
         for cluster in self.map.iter() {
-            match cluster.stage {
-                Stage::Untested => return Stage::Untested,
-                Stage::ForIsolation { .. } => {
-                    if recover_stage == Stage::Damaged || cluster.stage < recover_stage {
-                        // Note that recover_stage after first condition is
-                        // only ever Stage::ForIsolation(_), thus PartialEq,
-                        // PartialOrd are useful for comparing the internal value.
-                        recover_stage = cluster.stage
-                    }
-                }
-                Stage::Damaged => (),
-                Stage::Intact => (),
-            }
+            if cluster.stage < recover_stage {
+                recover_stage = cluster.stage;
+            }
         }
```
```diff
@@ -116,52 +108,29 @@ impl MapFile {
     /// Defragments cluster groups.
     /// I.E. check forwards every cluster from current until stage changes,
     /// then group at once.
-    pub fn defrag(&mut self) -> &mut Self {
+    pub fn defrag(&mut self) {
+        self.map.sort_by_key(|c| c.domain.start);
+
         let mut new_map: Vec<Cluster> = vec![];
-
-        // Fetch first cluster.
-        let mut start_cluster = self.map.iter().find(|c| c.domain.start == 0).unwrap();
-
-        // Even though this would be initialized by its first read,
-        // the compiler won't stop whining, and idk how to assert that to it.
-        let mut end_cluster = Cluster::default();
-        let mut new_cluster: Cluster;
-
-        let mut stage_common: bool;
-        let mut is_finished = false;
-
-        while !is_finished {
-            stage_common = true;
-
-            // Start a new cluster based on the cluster following
-            // the end of last new_cluster.
-            new_cluster = start_cluster.to_owned();
-
-            // While stage is common, and not finished,
-            // find each trailing cluster.
-            while stage_common && !is_finished {
-                end_cluster = start_cluster.to_owned();
-
-                if end_cluster.domain.end != self.domain.end {
-                    start_cluster = self
-                        .map
-                        .iter()
-                        .find(|c| end_cluster.domain.end == c.domain.start)
-                        .unwrap();
-
-                    stage_common = new_cluster.stage == start_cluster.stage
-                } else {
-                    is_finished = true;
-                }
-            }
-
-            // Set the new ending, encapsulating any clusters of common stage.
-            new_cluster.domain.end = end_cluster.domain.end;
-            new_map.push(new_cluster);
+        let mut idx = 0;
+        let mut master;
+        while idx < self.map.len() - 1 {
+            master = self.map[idx];
+
+            for c in self.map[idx + 1..self.map.len()].into_iter() {
+                if c.stage != master.stage {
+                    break;
+                }
+
+                idx += 1;
+            }
+
+            master.domain.end = self.map[idx].domain.end;
+            new_map.push(master);
+            idx += 1;
         }
 
         self.map = new_map;
-        self
     }
 
     /// Extend the domain of the MapFile.
```
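The defragmentation step — sort by `start`, then coalesce runs of clusters sharing a stage into one covering cluster — can be modeled compactly. This sketch uses stand-in types rather than the crate's own, and it additionally requires runs to be contiguous (`prev.end == c.start`), an invariant the real map appears to maintain by construction:

```rust
// Simplified model of the defrag step: merge adjacent, contiguous clusters
// with the same stage into a single cluster spanning the whole run.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Cluster {
    start: u64,
    end: u64,
    stage: u8, // stand-in for the Stage enum
}

fn defrag(mut map: Vec<Cluster>) -> Vec<Cluster> {
    map.sort_by_key(|c| c.start);
    let mut out: Vec<Cluster> = Vec::new();
    for c in map {
        match out.last_mut() {
            // Same stage and contiguous: extend the previous cluster.
            Some(prev) if prev.stage == c.stage && prev.end == c.start => prev.end = c.end,
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    let merged = defrag(vec![
        Cluster { start: 0, end: 1, stage: 0 },
        Cluster { start: 1, end: 2, stage: 0 },
        Cluster { start: 2, end: 3, stage: 1 },
    ]);
    assert_eq!(merged.len(), 2);
    assert_eq!((merged[0].start, merged[0].end), (0, 2));
}
```

One fold over the sorted map replaces the nested-loop search of the old implementation, which is essentially what the rewritten `defrag` in the diff is doing with its `idx`/`master` cursor.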
```diff
@@ -622,7 +591,7 @@ mod tests {
         let mut map = MapFile {
             map: vec![Cluster {
                 domain: Domain { start: 0, end: 3 },
-                stage: Stage::Untested,
+                stage: Stage::Patchwork { depth: 0 },
             }],
             ..Default::default()
         };
@@ -726,17 +695,16 @@ mod tests {
 
         // If this fails here, there's something SERIOUSLY wrong.
         assert!(
-            mf_stage == Stage::Untested,
+            mf_stage == Stage::Patchwork { depth: 0 },
             "Determined stage to be {:?}, when {:?} was expected.",
             mf_stage,
-            Stage::Untested
+            Stage::Patchwork { depth: 0 }
         );
 
         let stages = vec![
             Stage::Damaged,
-            Stage::ForIsolation { level: 1 },
-            Stage::ForIsolation { level: 0 },
-            Stage::Untested,
+            Stage::Patchwork { depth: 1 },
+            Stage::Patchwork { depth: 0 },
         ];
 
         mf.map = vec![];
@@ -762,20 +730,17 @@ mod tests {
 
         mf.map = vec![
             *Cluster::default().set_stage(Stage::Damaged),
-            *Cluster::default().set_stage(Stage::ForIsolation { level: 0 }),
-            *Cluster::default().set_stage(Stage::ForIsolation { level: 1 }),
+            *Cluster::default().set_stage(Stage::Patchwork { depth: 1 }),
             Cluster::default(),
             Cluster::default(),
-            *Cluster::default().set_stage(Stage::ForIsolation { level: 1 }),
-            *Cluster::default().set_stage(Stage::ForIsolation { level: 0 }),
+            *Cluster::default().set_stage(Stage::Patchwork { depth: 1 }),
             *Cluster::default().set_stage(Stage::Damaged),
         ];
 
         let stages = vec![
             Stage::Damaged,
-            Stage::ForIsolation { level: 1 },
-            Stage::ForIsolation { level: 0 },
-            Stage::Untested,
+            Stage::Patchwork { depth: 1 },
+            Stage::Patchwork { depth: 0 },
         ];
 
         for stage in stages {
```
```diff
@@ -803,63 +768,84 @@ mod tests {
             map: vec![
                 Cluster {
                     domain: Domain { start: 0, end: 1 },
-                    stage: Stage::Untested,
+                    stage: Stage::Patchwork { depth: 0 },
                 },
                 Cluster {
                     domain: Domain { start: 1, end: 2 },
-                    stage: Stage::Untested,
+                    stage: Stage::Patchwork { depth: 0 },
                 },
                 Cluster {
                     domain: Domain { start: 2, end: 3 },
-                    stage: Stage::Untested,
+                    stage: Stage::Patchwork { depth: 0 },
                 },
                 Cluster {
                     domain: Domain { start: 3, end: 4 },
-                    stage: Stage::ForIsolation { level: 0 },
+                    stage: Stage::Isolate,
                 },
                 Cluster {
                     domain: Domain { start: 4, end: 5 },
-                    stage: Stage::ForIsolation { level: 0 },
+                    stage: Stage::Isolate,
                 },
                 Cluster {
                     domain: Domain { start: 5, end: 6 },
-                    stage: Stage::ForIsolation { level: 1 },
+                    stage: Stage::Patchwork { depth: 1 },
                 },
                 Cluster {
                     domain: Domain { start: 6, end: 7 },
-                    stage: Stage::ForIsolation { level: 0 },
+                    stage: Stage::Patchwork { depth: 0 },
                 },
                 Cluster {
                     domain: Domain { start: 7, end: 8 },
                     stage: Stage::Damaged,
                 },
                 Cluster {
                     domain: Domain { start: 8, end: 10 },
                     stage: Stage::Intact,
                 },
                 Cluster {
                     domain: Domain { start: 10, end: 11 },
                     stage: Stage::BruteForceAndDesperation,
                 },
                 Cluster {
                     domain: Domain { start: 11, end: 12 },
                     stage: Stage::BruteForceAndDesperation,
                 },
             ],
         };
 
         let expected = vec![
             Cluster {
                 domain: Domain { start: 0, end: 3 },
-                stage: Stage::Untested,
+                stage: Stage::Patchwork { depth: 0 },
             },
             Cluster {
                 domain: Domain { start: 3, end: 5 },
-                stage: Stage::ForIsolation { level: 0 },
+                stage: Stage::Isolate,
             },
             Cluster {
                 domain: Domain { start: 5, end: 6 },
-                stage: Stage::ForIsolation { level: 1 },
+                stage: Stage::Patchwork { depth: 1 },
             },
             Cluster {
                 domain: Domain { start: 6, end: 7 },
-                stage: Stage::ForIsolation { level: 0 },
+                stage: Stage::Patchwork { depth: 0 },
             },
             Cluster {
                 domain: Domain { start: 7, end: 8 },
                 stage: Stage::Damaged,
             },
             Cluster {
                 domain: Domain { start: 8, end: 10 },
                 stage: Stage::Intact,
             },
             Cluster {
                 domain: Domain { start: 10, end: 12 },
                 stage: Stage::BruteForceAndDesperation,
             },
         ];
 
         mf.defrag();
+        mf.map.sort_by_key(|c| c.domain.start);
 
         let received = mf.map;
```
```diff
@@ -2,15 +2,17 @@ use serde::{Deserialize, Serialize};
 
 #[derive(Clone, Copy, Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
 pub enum Stage {
-    Untested,
-    ForIsolation { level: u8 },
+    // Don't mess with the order.
+    Patchwork { depth: usize },
+    Isolate,
+    BruteForceAndDesperation,
     Damaged,
     Intact,
 }
 
 impl Default for Stage {
     fn default() -> Self {
-        Stage::Untested
+        Stage::Patchwork { depth: 0 }
     }
 }
```
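The "Don't mess with the order" comment matters because `#[derive(PartialOrd, Ord)]` on an enum orders variants by declaration order, then by fields within a variant. That property is what lets `get_stage` pick the "earliest" remaining stage with a plain `<` comparison. A standalone check against a stand-in copy of the enum (without the serde derives):

```rust
// Derived Ord on an enum: variants compare by declaration position first,
// so Patchwork < Isolate < BruteForceAndDesperation < Damaged < Intact.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Stage {
    Patchwork { depth: usize },
    Isolate,
    BruteForceAndDesperation,
    Damaged,
    Intact,
}

fn main() {
    // Variant position dominates, regardless of field values.
    assert!(Stage::Patchwork { depth: 99 } < Stage::Isolate);
    assert!(Stage::Isolate < Stage::Damaged);
    // Within a variant, fields compare, so deeper Patchwork sorts later.
    assert!(Stage::Patchwork { depth: 0 } < Stage::Patchwork { depth: 1 });
}
```

Reordering the variants would silently change which stage `<` considers "smaller", which is exactly the invariant the comment is guarding.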
## src/recovery.rs (114 changed lines)

```diff
@@ -43,19 +43,15 @@ impl Recover {
 
     /// Recover media.
     pub fn run(&mut self) -> anyhow::Result<usize> {
-        // From start, read to end or error.
-        //
-        // If all data recovered, return early.
-        // Else, read from end to error.
-
         let mut is_finished = false;
 
         while !is_finished {
+            self.map.defrag();
+
             match self.map.get_stage() {
-                Stage::Untested => self.copy_untested()?,
-                Stage::ForIsolation { .. } => todo!(),
+                Stage::Patchwork { depth } => self.copy_patchwork(depth)?,
+                Stage::Isolate => todo!(),
+                Stage::BruteForceAndDesperation => todo!(),
                 Stage::Damaged | Stage::Intact => {
                     println!("Cannot recover further.");
```
```diff
@@ -89,39 +85,52 @@ impl Recover {
     }
 
-    /// Attempt to copy all untested blocks.
-    fn copy_untested(&mut self) -> anyhow::Result<()> {
-        let mut buf = DirectIOBuffer::default();
-
-        for untested in self.map.get_clusters(Stage::Untested) {
-            // Caching.
-            let mut read_position: usize;
-            let mut cluster: Cluster;
-            let mut buf_capacity = self.get_buf_capacity() as usize;
-
-            dbg!(untested.domain);
-            read_position = untested.domain.start;
-
-            while read_position < untested.domain.end {
-                dbg!(read_position);
-
-                buf_capacity = buf_capacity.min(untested.domain.end - read_position);
-
-                cluster = Cluster {
-                    domain: Domain {
-                        start: read_position,
-                        end: read_position + buf_capacity,
-                    },
-                    stage: Stage::Intact,
-                };
-
-                if let Err(err) = self.input.read_exact(&mut buf.0) {
+    fn copy_patchwork(&mut self, mut depth: usize) -> anyhow::Result<()> {
+        let mut buf = DirectIOBuffer::new();
+        let mut buf_capacity = self.get_buf_capacity() as usize;
+
+        while self.map.get_stage() == (Stage::Patchwork { depth }) {
+            // Order of these two expressions matters, stupid.
+            buf_capacity /= depth;
+            depth += 1;
+
+            for cluster in self.map.get_clusters(Stage::Patchwork { depth }) {
+                self.read_domain(buf.as_mut(), cluster.domain, buf_capacity, Stage::Isolate)?;
+            }
+        }
+
+        Ok(())
+    }
+
+    fn read_domain(
+        &mut self,
+        buf: &mut [u8],
+        domain: Domain,
+        mut buf_capacity: usize,
+        next_stage: Stage,
+    ) -> anyhow::Result<()> {
+        let mut cluster;
+        let mut read_position = domain.start;
+
+        while read_position < domain.end {
+            buf_capacity = buf_capacity.min(domain.end - read_position);
+
+            // If buf were zeroed out before every read, one could theoretically recover
+            // part of that read given the assumption that all null values from the end to
+            // the first non-null value are unread, and some further padding from the last
+            // values are potentially invalid.
+            //
+            // That padding should have a cli arg to control it.
+            cluster = Cluster {
+                domain: Domain {
+                    start: read_position,
+                    end: read_position + buf_capacity,
+                },
+                stage: Stage::Intact,
+            };
+
+            match self.read_sectors(buf.as_mut()) {
+                Ok(bytes) => {
+                    self.output
+                        .write_all(&buf[0..bytes])
+                        .context("Failed to write data to output file")?;
+                    read_position += bytes;
+                }
+                Err(err) => {
+                    println!("Hit error: {:?}", err);
+                    if CONFIG.reopen_on_error {
+                        self.reload_input()
```
```diff
@@ -135,26 +144,14 @@ impl Recover {
                         .seek_relative(buf_capacity as i64)
                         .context("Failed to seek output by buf_capacity to skip previous error")?;
 
-                    // I don't remember what level was for.
-                    cluster.stage = Stage::ForIsolation { level: 1 };
+                    cluster.stage = next_stage.clone();
                 }
 
-                if cluster.stage == Stage::Intact {
-                    self.output
-                        .write_all(&buf.0[0..buf_capacity])
-                        .context("Failed to write data to output file")?;
-                }
-
-                self.map.update(cluster);
-                self.map.write_to(&mut crate::io::load_map_write()?)?;
-                read_position += buf_capacity;
-            }
-
-            self.map.update(cluster);
-            self.map.write_to(&mut crate::io::load_map_write()?)?;
-        }
-
-        drop(buf);
-
-        self.map.write_to(&mut crate::io::load_map_write()?)?;
+            self.map.update(cluster);
+            self.map.write_to(&mut crate::io::load_map_write()?)?;
+            read_position += buf_capacity;
+        }
 
         Ok(())
     }
```
```diff
@@ -172,6 +169,25 @@ impl Recover {
         self.input.seek(SeekFrom::Start(seek_pos))?;
         Ok(())
     }
+
+    fn read_sectors(&mut self, mut buf: &mut [u8]) -> std::io::Result<usize> {
+        let mut raw_buf = vec![crate::FB_NULL_VALUE; buf.len()];
+        let result = self.input.read(&mut raw_buf);
+
+        if result.is_err() {
+            return result;
+        } else if let Ok(mut bytes) = result
+            && bytes >= CONFIG.sector_size
+        {
+            // Remember that this is integer division (floor division)
+            bytes = (bytes / CONFIG.sector_size) * CONFIG.sector_size;
+            buf.write_all(&raw_buf[..bytes]).unwrap();
+
+            return Ok(bytes);
+        } else {
+            return Ok(0);
+        }
+    }
 }
 
 #[cfg(test)]
```
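The floor-division comment in `read_sectors` is the whole trick: a partial read is rounded down to a whole number of sectors, so the copy never emits a fragment of a sector. In isolation:

```rust
// Integer division floors, so (bytes / s) * s is the largest multiple of the
// sector size that fits in `bytes` — any trailing partial sector is dropped.
fn whole_sectors(bytes: usize, sector_size: usize) -> usize {
    (bytes / sector_size) * sector_size
}

fn main() {
    assert_eq!(whole_sectors(5000, 2048), 4096); // 2 full sectors
    assert_eq!(whole_sectors(2048, 2048), 2048); // exact multiple is kept
    assert_eq!(whole_sectors(2047, 2048), 0);    // less than one sector reads as nothing
}
```

This is also why `read_sectors` returns `Ok(0)` when fewer than `CONFIG.sector_size` bytes arrive: after rounding, there is nothing sector-aligned to hand back.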