1 Commits

Author SHA1 Message Date
dependabot[bot]
c493213f8c Bump clap from 4.5.31 to 4.5.38
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.31 to 4.5.38.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v4.5.31...clap_complete-v4.5.38)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.38
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-12 19:44:21 +00:00
20 changed files with 1178 additions and 1828 deletions

.github/dependabot.yml (vendored, new file, +11)

@@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: "cargo" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "daily"


@@ -1,128 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of
any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address,
without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
<olivia.a.brooks77@gmail.com>.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the
[Contributor Covenant](https://www.contributor-covenant.org/), version 2.1,
available at
<https://www.contributor-covenant.org/version/2/1/code_of_conduct/>.
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/inclusion).
For answers to common questions about this code of conduct, see the FAQ at
<https://www.contributor-covenant.org/faq/>. Translations are available at
<https://www.contributor-covenant.org/translations/>.

Cargo.lock (generated, 592 lines changed)

@@ -3,114 +3,77 @@
version = 4
[[package]]
name = "addr2line"
version = "0.25.1"
name = "aho-corasick"
version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1b5d307320b3181d6d7954e663bd7c774a838b8220fe0593c86d9fb09f498b4b"
checksum = "8e60d3430d3a69478ad0993f19238d2df97c507009a52b3c10addcd7f6bcb916"
dependencies = [
"gimli",
]
[[package]]
name = "adler2"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "320119579fcad9c21884f5c4861d16174d0e06250625266f50fe6898340abefa"
[[package]]
name = "anstream"
version = "0.6.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3ae563653d1938f79b1ab1b5e668c87c76a9930414574a6583a7b7e11a8e6192"
dependencies = [
"anstyle",
"anstyle-parse",
"anstyle-query",
"anstyle-wincon",
"colorchoice",
"is_terminal_polyfill",
"utf8parse",
"memchr",
]
[[package]]
name = "anstyle"
version = "1.0.13"
version = "1.0.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78"
[[package]]
name = "anstyle-parse"
version = "0.2.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2"
dependencies = [
"utf8parse",
]
[[package]]
name = "anstyle-query"
version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c8bdeb6047d8983be085bab0ba1472e6dc604e7041dbf6fcd5e71523014fae9"
dependencies = [
"windows-sys",
]
[[package]]
name = "anstyle-wincon"
version = "3.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "403f75924867bb1033c59fbf0797484329750cfbe3c4325cd33127941fabc882"
dependencies = [
"anstyle",
"once_cell_polyfill",
"windows-sys",
]
checksum = "55cc3b69f167a1ef2e161439aa98aed94e6028e5f9a59be9a6ffb47aef1651f9"
[[package]]
name = "anyhow"
version = "1.0.100"
version = "1.0.96"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61"
checksum = "6b964d184e89d9b6b67dd2715bc8e74cf3107fb2b529990c90cf517326150bf4"
[[package]]
name = "arc-swap"
version = "1.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69f7f8c3906b62b754cd5326047894316021dcfe5a194c8ea52bdd94934a3457"
[[package]]
name = "base62"
version = "2.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "10e52a7bcb1d6beebee21fb5053af9e3cbb7a7ed1a4909e534040e676437ab1f"
dependencies = [
"backtrace",
"rustversion",
]
[[package]]
name = "backtrace"
version = "0.3.76"
name = "base64"
version = "0.21.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bb531853791a215d7c62a30daf0dde835f381ab5de4589cfe7c649d2cbe92bd6"
dependencies = [
"addr2line",
"cfg-if",
"libc",
"miniz_oxide",
"object",
"rustc-demangle",
"windows-link",
]
checksum = "9d297deb1925b89f2ccc13d7635fa0714f12c87adce1c75356b39ca9b7178567"
[[package]]
name = "bitflags"
version = "2.10.0"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
[[package]]
name = "bitflags"
version = "2.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f68f53c83ab957f72c32642f3868eec03eb974d1fb82e453128456482613d36"
dependencies = [
"serde_core",
"serde",
]
[[package]]
name = "cfg-if"
version = "1.0.4"
name = "bstr"
version = "1.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801"
checksum = "531a9155a481e2ee699d4f98f43c0ca4ff8ee1bfd55c31e9e98fb29d2b176fe0"
dependencies = [
"memchr",
"serde",
]
[[package]]
name = "clap"
version = "4.5.53"
version = "4.5.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9e340e012a1bf4935f5282ed1436d1489548e8f72308207ea5df0e23d2d03f8"
checksum = "ed93b9805f8ba930df42c2590f05453d5ec36cbb85d018868a5b24d31f6ac000"
dependencies = [
"clap_builder",
"clap_derive",
@@ -118,11 +81,10 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.53"
version = "4.5.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d76b5d13eaa18c901fd2f7fca939fefe3a0727a953561fefdf3b2922b8569d00"
checksum = "379026ff283facf611b0ea629334361c4211d1b12ee01024eec1591133b04120"
dependencies = [
"anstream",
"anstyle",
"clap_lex",
"strsim",
@@ -130,9 +92,9 @@ dependencies = [
[[package]]
name = "clap_derive"
version = "4.5.49"
version = "4.5.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a0b5487afeab2deb2ff4e03a807ad1a03ac532ff5a2cee5d86884440c7f7671"
checksum = "09176aae279615badda0765c0c0b3f6ed53f4709118af73cf4655d85d1530cd7"
dependencies = [
"heck",
"proc-macro2",
@@ -142,21 +104,82 @@ dependencies = [
[[package]]
name = "clap_lex"
version = "0.7.6"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6"
[[package]]
name = "colorchoice"
version = "1.0.4"
name = "crossbeam-deque"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75"
checksum = "9dd111b7b7f7d55b72c0a6ae361660ee5853c9af73f70c3c2ef6858b950e2e51"
dependencies = [
"crossbeam-epoch",
"crossbeam-utils",
]
[[package]]
name = "gimli"
version = "0.32.3"
name = "crossbeam-epoch"
version = "0.9.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e629b9b98ef3dd8afe6ca2bd0f89306cec16d43d907889945bc5d6687f2f13c7"
checksum = "5b82ac4a3c2ca9c3460964f020e1402edd5753411d7737aa39c3714ad1b5420e"
dependencies = [
"crossbeam-utils",
]
[[package]]
name = "crossbeam-utils"
version = "0.8.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28"
[[package]]
name = "either"
version = "1.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b7914353092ddf589ad78f25c5c1c21b7f80b0ff8621e7c814c3485b5306da9d"
[[package]]
name = "equivalent"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"
[[package]]
name = "glob"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a8d1add55171497b4705a648c6b583acafb01d58050a51727785f0b2c8e0a2b2"
[[package]]
name = "globset"
version = "0.4.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "15f1ce686646e7f1e19bf7d5533fe443a45dbfb990e00629110797578b42fb19"
dependencies = [
"aho-corasick",
"bstr",
"log",
"regex-automata",
"regex-syntax",
]
[[package]]
name = "globwalk"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "93e3af942408868f6934a7b85134a3230832b9977cf66125df2f9edcfce4ddcc"
dependencies = [
"bitflags 1.3.2",
"ignore",
"walkdir",
]
[[package]]
name = "hashbrown"
version = "0.15.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289"
[[package]]
name = "heck"
@@ -165,132 +188,314 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "is_terminal_polyfill"
version = "1.70.1"
name = "ignore"
version = "0.4.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf"
checksum = "6d89fd380afde86567dfba715db065673989d6253f42b88179abd3eae47bda4b"
dependencies = [
"crossbeam-deque",
"globset",
"log",
"memchr",
"regex-automata",
"same-file",
"walkdir",
"winapi-util",
]
[[package]]
name = "indexmap"
version = "2.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c9c992b02b5b4c94ea26e32fe5bccb7aa7d9f390ab5c1221ff895bc7ea8b652"
dependencies = [
"equivalent",
"hashbrown",
]
[[package]]
name = "itertools"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1c173a5686ce8bfa551b3563d0c2170bf24ca44da99c7ca4bfdab5418c3fe57"
dependencies = [
"either",
]
[[package]]
name = "itoa"
version = "1.0.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d75a2a4b1b190afb6f5425f10f6a8f959d2ea0b9c2b1d79553551850539e4674"
[[package]]
name = "kramer"
version = "0.1.0"
dependencies = [
"anyhow",
"clap",
"libc",
"ron",
"rust-i18n",
"serde",
]
[[package]]
name = "lazy_static"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
[[package]]
name = "libc"
version = "0.2.178"
version = "0.2.171"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37c93d8daa9d8a012fd8ab92f088405fb202ea0b6ab73ee2482ae66af4f42091"
checksum = "c19937216e9d3aa9956d9bb8dfc0b0c8beb6058fc4f7a4dc4d850edf86a237d6"
[[package]]
name = "memchr"
version = "2.7.6"
name = "libyml"
version = "0.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f52b00d39961fc5b2736ea853c9cc86238e165017a493d1d5c8eac6bdc4cc273"
[[package]]
name = "miniz_oxide"
version = "0.8.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fa76a2c86f704bdb222d66965fb3d63269ce38518b83cb0575fca855ebb6316"
checksum = "3302702afa434ffa30847a83305f0a69d6abd74293b6554c18ec85c7ef30c980"
dependencies = [
"adler2",
"anyhow",
"version_check",
]
[[package]]
name = "object"
version = "0.37.3"
name = "log"
version = "0.4.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff76201f031d8863c38aa7f905eca4f53abbfa15f609db4277d44cd8938f33fe"
checksum = "30bde2b3dc3671ae49d8e2e9f044c7c005836e7a023ee57cffa25ab82764bb9e"
[[package]]
name = "memchr"
version = "2.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
[[package]]
name = "normpath"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8911957c4b1549ac0dc74e30db9c8b0e66ddcd6d7acc33098f4c63a64a6d7ed"
dependencies = [
"memchr",
"windows-sys",
]
[[package]]
name = "once_cell"
version = "1.21.3"
version = "1.20.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
[[package]]
name = "once_cell_polyfill"
version = "1.70.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a4895175b425cb1f87721b59f0f286c2092bd4af812243672510e1ac53e2e0ad"
checksum = "945462a4b81e43c4e3ba96bd7b49d834c6f61198356aa858733bc4acf3cbe62e"
[[package]]
name = "proc-macro2"
version = "1.0.104"
version = "1.0.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9695f8df41bb4f3d222c95a67532365f569318332d03d5f3f67f37b20e6ebdf0"
checksum = "37d3544b3f2748c54e147655edb5025752e2303145b5aefb3c3ea2c78b973bb0"
dependencies = [
"unicode-ident",
]
[[package]]
name = "quote"
version = "1.0.42"
version = "1.0.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a338cc41d27e6cc6dce6cefc13a0729dfbb81c262b1f519331575dd80ef3067f"
checksum = "b5b9d34b8991d19d98081b46eacdd8eb58c6f2b201139f7c5f643cc155a633af"
dependencies = [
"proc-macro2",
]
[[package]]
name = "ron"
version = "0.12.0"
name = "regex"
version = "1.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd490c5b18261893f14449cbd28cb9c0b637aebf161cd77900bfdedaff21ec32"
checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191"
dependencies = [
"bitflags",
"once_cell",
"serde",
"serde_derive",
"typeid",
"unicode-ident",
"aho-corasick",
"memchr",
"regex-automata",
"regex-syntax",
]
[[package]]
name = "rustc-demangle"
version = "0.1.26"
name = "regex-automata"
version = "0.4.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "56f7d92ca342cea22a06f2121d944b4fd82af56988c270852495420f961d4ace"
checksum = "809e8dc61f6de73b46c85f4c96486310fe304c434cfa43669d7b40f711150908"
dependencies = [
"aho-corasick",
"memchr",
"regex-syntax",
]
[[package]]
name = "regex-syntax"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "ron"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b91f7eff05f748767f183df4320a63d6936e9c6107d97c9e6bdd9784f4289c94"
dependencies = [
"base64",
"bitflags 2.8.0",
"serde",
"serde_derive",
]
[[package]]
name = "rust-i18n"
version = "3.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "71b3a6e1c6565b77c86d868eea3068b0eb39582510f9c78cfbd5c67bd36fda9b"
dependencies = [
"globwalk",
"once_cell",
"regex",
"rust-i18n-macro",
"rust-i18n-support",
"smallvec",
]
[[package]]
name = "rust-i18n-macro"
version = "3.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6180d8506af2b485ffc1eab7fc6d15678336a694f2b5efac5f2ca78c52928275"
dependencies = [
"glob",
"once_cell",
"proc-macro2",
"quote",
"rust-i18n-support",
"serde",
"serde_json",
"serde_yml",
"syn",
]
[[package]]
name = "rust-i18n-support"
version = "3.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "938f16094e2b09e893b1f85c9da251739a832d4272a5957217977da3a0713bb6"
dependencies = [
"arc-swap",
"base62",
"globwalk",
"itertools",
"lazy_static",
"normpath",
"once_cell",
"proc-macro2",
"regex",
"serde",
"serde_json",
"serde_yml",
"siphasher",
"toml",
"triomphe",
]
[[package]]
name = "rustversion"
version = "1.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f7c45b9784283f1b2e7fb61b42047c2fd678ef0960d4f6f1eba131594cc369d4"
[[package]]
name = "ryu"
version = "1.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ea1a2d0a644769cc99faa24c3ad26b379b786fe7c36fd3c546254801650e6dd"
[[package]]
name = "same-file"
version = "1.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502"
dependencies = [
"winapi-util",
]
[[package]]
name = "serde"
version = "1.0.228"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a8e94ea7f378bd32cbbd37198a4a91436180c5bb472411e48b5ec2e2124ae9e"
dependencies = [
"serde_core",
"serde_derive",
]
[[package]]
name = "serde_core"
version = "1.0.228"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "41d385c7d4ca58e59fc732af25c3983b67ac852c1a25000afe1175de458b67ad"
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.228"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d540f220d3187173da220f885ab66608367b6574e925011a9353e4badda91d79"
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.139"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44f86c3acccc9c65b153fe1b85a3be07fe5515274ec9f0653b4a0875731c72a6"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
]
[[package]]
name = "serde_spanned"
version = "0.6.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "87607cb1398ed59d48732e575a4c28a7a8ebf2454b964fe3f224f2afc07909e1"
dependencies = [
"serde",
]
[[package]]
name = "serde_yml"
version = "0.0.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "59e2dd588bf1597a252c3b920e0143eb99b0f76e4e082f4c92ce34fbc9e71ddd"
dependencies = [
"indexmap",
"itoa",
"libyml",
"memchr",
"ryu",
"serde",
"version_check",
]
[[package]]
name = "siphasher"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "56199f7ddabf13fe5074ce809e7d3f42b42ae711800501b5b16ea82ad029c39d"
[[package]]
name = "smallvec"
version = "1.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fcf8323ef1faaee30a44a340193b1ac6814fd9b7b4e88e9d4519a3e4abe1cfd"
[[package]]
name = "stable_deref_trait"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a8f112729512f8e442d81f95a8a7ddf2b7c6b8a1a6f509a95864142b30cab2d3"
[[package]]
name = "strsim"
version = "0.11.1"
@@ -299,9 +504,9 @@ checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"
[[package]]
name = "syn"
version = "2.0.112"
version = "2.0.89"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "21f182278bf2d2bcb3c88b1b08a37df029d71ce3d3ae26168e3c653b213b99d4"
checksum = "44d46482f1c1c87acd84dea20c1bf5ebff4c757009ed6bf19cfd36fb10e92c4e"
dependencies = [
"proc-macro2",
"quote",
@@ -309,28 +514,80 @@ dependencies = [
]
[[package]]
name = "typeid"
version = "1.0.3"
name = "toml"
version = "0.8.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc7d623258602320d5c55d1bc22793b57daff0ec7efc270ea7d55ce1d5f5471c"
checksum = "cd87a5cdd6ffab733b2f74bc4fd7ee5fff6634124999ac278c35fc78c6120148"
dependencies = [
"serde",
"serde_spanned",
"toml_datetime",
"toml_edit",
]
[[package]]
name = "toml_datetime"
version = "0.6.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0dd7358ecb8fc2f8d014bf86f6f638ce72ba252a2c3a2572f2a795f1d23efb41"
dependencies = [
"serde",
]
[[package]]
name = "toml_edit"
version = "0.22.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17b4795ff5edd201c7cd6dca065ae59972ce77d1b80fa0a84d94950ece7d1474"
dependencies = [
"indexmap",
"serde",
"serde_spanned",
"toml_datetime",
"winnow",
]
[[package]]
name = "triomphe"
version = "0.1.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ef8f7726da4807b58ea5c96fdc122f80702030edc33b35aff9190a51148ccc85"
dependencies = [
"arc-swap",
"serde",
"stable_deref_trait",
]
[[package]]
name = "unicode-ident"
version = "1.0.22"
version = "1.0.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9312f7c4f6ff9069b165498234ce8be658059c6728633667c526e27dc2cf1df5"
checksum = "adb9e6ca4f869e1180728b7950e35922a7fc6397f7b641499e8f3ef06e50dc83"
[[package]]
name = "utf8parse"
version = "0.2.2"
name = "version_check"
version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"
[[package]]
name = "windows-link"
version = "0.2.1"
name = "walkdir"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
checksum = "29790946404f91d9c5d06f9874efddea1dc06c5efe94541a7d6863108e3a5e4b"
dependencies = [
"same-file",
"winapi-util",
]
[[package]]
name = "winapi-util"
version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb"
dependencies = [
"windows-sys",
]
[[package]]
name = "windows-sys"
@@ -404,3 +661,12 @@ name = "windows_x86_64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "winnow"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0e7f4ea97f6f78012141bcdb6a216b2609f0979ada50b20ca5b52dde2eac2bb1"
dependencies = [
"memchr",
]


@@ -1,33 +1,34 @@
[package]
name = "kramer"
version = "0.1.0"
edition = "2024"
authors = ["Olivia Brooks"]
repository = "https://gitea.cutieguwu.ca/cutieguwu/kramer"
license = "MIT"
publish = false
edition = "2021"
[dependencies]
ron = ">=0.8, <0.13"
#rust-i18n = "3.1.3"
[dependencies.anyhow]
version = "1.0"
features = ["backtrace"]
# NOTE:
# = X.X.X is the version used in testing.
# Use this version for greatest compatibility.
#
# For clap info, see [dependencies.clap]
# For serde info, see [dependencies.serde]
libc = "0.2.171, ~0.2.169"
ron = "0.8.1, >=0.8, <0.9"
rust-i18n = "3.1.3, ~3.1.3"
[dependencies.clap]
version = "4.5"
features = ["derive"]
version = "4.5, ~4.5.27"
default-features = false
features = [
# From default features collection
"error-context",
"help",
"std",
"suggestions",
"usage",
# Optional features
"derive",
]
[dependencies.serde]
version = "1.0"
version = "1.0.219, ~1.0.217"
features = ["derive"]
# Yes. For one constant, this library is required.
# Technically, this did a bit more in early testing when I messed about
# with unsafe ffi disasters trying to solve problems.
#
# And yes, I spent time tracking down the first release with that constant.
# v0.2.25 is almost 9 years old as of writing this comment
[target.'cfg(all(unix, not(target_os = "macos")))'.dependencies]
libc = "~0.2.25"


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2025 Olivia Brooks
Copyright (c) 2025 Olivia Bridie Alexandria Millicent Ivette Brooks
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.adoc (new file, +41)

@@ -0,0 +1,41 @@
= kramer
:toc:
// Hello people reading the README source :)
== Prelude
VERY EARLY ALPHA -- NOT YET FUNCTIONAL
I needed a program to efficiently repair the data on optical discs.
== Goals
* [*] CLI Args
** [*] Input device
** [*] Output file (ISO 9660)
** [*] Repair map file
** [*] sequence_length
** [*] brute_passes
** [*] Sector size override?
* Repair Algorithm
** Stage 1: Trial
*** [ ] 1 - From first sector, parse forward to error.
*** [ ] 2 - From last sector, parse backwards to error.
*** [ ] 3 - From center of data for trial, parse forward to error or end of remaining trial domain.
*** [ ] 4 - Stripe-skip remaining data, attempting to read largest trial domains first.
**** [ ] If data keeps reading good, no skip will occur until an error is reached.
** Stage 2: Isolation
*** [ ] From largest to smallest untrustworthy sequence, attempt to read each sequence at half sequence_length.
*** [ ] Same, but at quarter sequence_length.
*** [ ] Same, but at eighth sequence_length.
*** [ ] By sector, parse untrustworthy sequences from start to error, and end to error. Mark mid section for brute force.
** Stage 3: Brute Force
*** [ ] Desperately attempt to recover data from marked sections.
*** [ ] Attempt for brute_passes, retrying all failed sectors.
* [ ] Repair Map
** [ ] I'll figure out some kind of language for this...
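The three repair stages in the goals above could be modeled as a simple state machine. This is a hypothetical sketch, not the crate's actual types; the only naming hint in this repository is the old README's mention of `Stage::BruteForceAndDesperation`.

```rust
/// Hypothetical model of the three repair stages listed in the goals.
/// The real crate's naming may differ.
#[derive(Debug, PartialEq)]
enum Stage {
    Trial,
    Isolation,
    BruteForce,
}

impl Stage {
    /// Advance to the next stage; brute force is terminal.
    fn next(&self) -> Stage {
        match self {
            Stage::Trial => Stage::Isolation,
            Stage::Isolation => Stage::BruteForce,
            Stage::BruteForce => Stage::BruteForce,
        }
    }
}

fn main() {
    assert_eq!(Stage::Trial.next(), Stage::Isolation);
    assert_eq!(Stage::BruteForce.next(), Stage::BruteForce);
}
```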
== License


@@ -1,70 +0,0 @@
# kramer
`kramer` is a data recovery utility for optical media.
There are plans to change the project name, but during initial development it
will continue to be referred to as kramer.
This is still in very early development, so expect old maps to no longer work.
## Plans
### Core
- [x] Mapping
- [x] Record the state of disc regions.
- [x] Recover from saved map.
- [x] Backup old map before truncating for new.
- [ ] Recovery
- [x] Initial / Patchworking
Technically there is an outstanding issue with sleepy firmware here,
but besides that, this works.
- [ ] Isolate
- [ ] Scraping
- [ ] CLI
- [x] Arguments
- [ ] Recovery progress
- [ ] Recovery stats
- [ ] Documentation, eugh.
### Extra
- [ ] i18n
- [ ] English
- [ ] French
- [ ] TUI (akin to `ddrescueview`)
- [ ] Visual status map
- [ ] Recovery properties
- [ ] Recovery progress
- [ ] Recovery stats
## Recovery Strategy
### Initial Pass / Patchworking
Tries to read clusters of `max_buffer_size`, marking clusters with errors with
an increasing `level`.
This works by halving the length of the read buffer until one of three
conditions is met:
1. `max_buffer_size` has been divided by `max_buffer_subdivision`
(like a maximum recursion depth).
Effectively, it will keep running `max_buffer_size / isolation_depth;
isolation_depth++;` on each pass until `isolation_depth ==
max_buffer_subdivision`
2. `buffer_size <= min_buffer_size`
3. `buffer_size <= sector_size`
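The halving schedule described by the three conditions above can be sketched as follows. The parameter names (`max_buffer_size`, `max_buffer_subdivision`, `min_buffer_size`, `sector_size`) follow this section, but the function itself is an illustrative assumption, not the project's actual implementation.

```rust
/// Return the sequence of buffer sizes a patchworking pass would try,
/// halving until one of the three stop conditions from the README is met.
/// Sketch only; whether the floor-hitting size is still attempted is an
/// assumption made here for illustration.
fn buffer_size_schedule(
    max_buffer_size: usize,
    max_buffer_subdivision: u32,
    min_buffer_size: usize,
    sector_size: usize,
) -> Vec<usize> {
    let mut sizes = Vec::new();
    let mut buffer_size = max_buffer_size;
    let mut depth: u32 = 0;
    loop {
        sizes.push(buffer_size);
        depth += 1;
        // Condition 1: maximum subdivision depth reached.
        if depth >= max_buffer_subdivision {
            break;
        }
        buffer_size /= 2;
        // Conditions 2 and 3: buffer shrank to the configured floor
        // or below one sector; try it once more, then stop.
        if buffer_size <= min_buffer_size || buffer_size <= sector_size {
            sizes.push(buffer_size);
            break;
        }
    }
    sizes
}

fn main() {
    // 16 KiB start, depth limit 3: two halving steps before the depth cap.
    assert_eq!(buffer_size_schedule(16384, 3, 2048, 2048), vec![16384, 8192, 4096]);
    // 8 KiB start, generous depth: halving stops at the 2 KiB floor.
    assert_eq!(buffer_size_schedule(8192, 10, 2048, 512), vec![8192, 4096, 2048]);
}
```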
### Isolate
This is where we reach brute forcing territory. `ddrescue` refers to this as
trimming. `kramer` implements the same technique. However, thanks to the
patchworking pass, this sector-at-a-time reading can be minimized, hopefully
reducing wear and overall recovery time on drives with a very short spin-down
delay.
### Scraping (Stage::BruteForceAndDesperation)
This is the pure brute force, sector-at-a-time read. This has identical
behaviour to `ddrescue`'s scraping phase.
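The scraping loop above amounts to retrying every still-bad sector once per pass, for at most `brute_passes` passes. A minimal sketch, where `read_sector` is a hypothetical callback standing in for the real device read (not the project's API):

```rust
/// Sketch of the scraping stage: retry each still-unreadable sector once
/// per pass, up to `brute_passes` passes. Returns sectors that never read.
/// `read_sector` is a stand-in callback, not kramer's actual interface.
fn scrape(
    mut bad_sectors: Vec<u64>,
    brute_passes: usize,
    mut read_sector: impl FnMut(u64) -> bool,
) -> Vec<u64> {
    for _ in 0..brute_passes {
        // Keep only the sectors that still fail to read this pass.
        bad_sectors.retain(|&lba| !read_sector(lba));
        if bad_sectors.is_empty() {
            break;
        }
    }
    bad_sectors
}

fn main() {
    // Even LBAs "read" successfully; odd ones never recover.
    let unrecovered = scrape(vec![1, 2, 3, 4], 2, |lba| lba % 2 == 0);
    assert_eq!(unrecovered, vec![1, 3]);
}
```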


@@ -1,48 +0,0 @@
use std::path::PathBuf;
use std::sync::LazyLock;
use clap::{ArgAction, Parser};
pub static CONFIG: LazyLock<Args> = LazyLock::new(|| Args::parse());
#[derive(Parser, Debug, Clone)]
pub struct Args {
/// Path to source file or block device
#[arg(short, long, value_hint = clap::ValueHint::DirPath)]
pub input: PathBuf,
/// Path to output file. Defaults to {input}.iso
#[arg(short, long, value_hint = clap::ValueHint::DirPath)]
pub output: Option<PathBuf>,
/// Path to rescue map. Defaults to {input}.map
#[arg(short, long, value_hint = clap::ValueHint::DirPath)]
pub map: Option<PathBuf>,
/// Max number of consecutive sectors to test as a group
#[arg(short, long, default_value_t = crate::FB_CLUSTER_LEN)]
pub cluster_length: usize,
/// Number of brute force read passes
#[arg(short, long, default_value_t = 2)]
pub brute_passes: usize,
/// Sector size
#[arg(short, long, default_value_t = crate::FB_SECTOR_SIZE)]
pub sector_size: usize,
// !!! ArgAction behaviour is backwards !!!
// ArgAction::SetFalse by default evaluates to true,
// ArgAction::SetTrue by default evaluates to false.
//
/// Upon encountering a read error, reopen the source file before continuing.
#[arg(short, long, action = ArgAction::SetTrue)]
pub reopen_on_error: bool,
/// Use O_DIRECT to bypass kernel buffer when reading.
//
// BSD seems to support O_DIRECT, but MacOS for certain does not.
#[cfg(all(unix, not(target_os = "macos")))]
#[arg(short, long = "direct", action = ArgAction::SetFalse)]
pub direct_io: bool,
}

src/io.rs

@@ -1,154 +0,0 @@
use std::fs::{File, OpenOptions};
use std::io::{self, Seek, SeekFrom};
use std::ops::Index;
use std::path::Path;
use crate::cli::CONFIG;
use anyhow::{Context, anyhow};
/// Get length of data stream.
/// Physical length of data stream in bytes
/// (multiple of sector_size, rather than actual).
///
/// This will attempt to return the stream to its current read position.
pub fn get_stream_length<S: Seek>(stream: &mut S) -> io::Result<u64> {
let pos = stream.stream_position()?;
let len = stream.seek(SeekFrom::End(0));
stream.seek(SeekFrom::Start(pos))?;
len
}
#[cfg(all(unix, not(target_os = "macos")))]
pub fn load_input() -> anyhow::Result<File> {
use std::os::unix::fs::OpenOptionsExt;
let mut options = OpenOptions::new();
options.read(true);
if CONFIG.direct_io {
options.custom_flags(libc::O_DIRECT);
}
options
.open(&CONFIG.input)
.with_context(|| format!("Failed to open input file: {}", &CONFIG.input.display()))
}
#[cfg(any(not(unix), target_os = "macos"))]
pub fn load_input() -> anyhow::Result<File> {
OpenOptions::new()
.read(true)
.open(&CONFIG.input)
.with_context(|| format!("Failed to open input file: {}", &CONFIG.input.display()))
}
pub fn load_output() -> anyhow::Result<File> {
OpenOptions::new()
.read(true)
.write(true)
.create(true)
.open(crate::path::OUTPUT_PATH.clone())
.with_context(|| {
format!(
"Failed to open/create output file at: {}",
crate::path::OUTPUT_PATH.display()
)
})
}
pub fn load_map_read() -> std::io::Result<File> {
OpenOptions::new()
.read(true)
.open(crate::path::MAP_PATH.clone())
}
pub fn load_map_write() -> anyhow::Result<File> {
// Attempt to check if a map exists on the disk.
// If so, make a backup of it.
//
// This should be recoverable by just skipping over this error and logging a warning,
// but for now it will be an error condition.
if std::fs::exists(crate::path::MAP_PATH.clone())
.context("Could not check if map exists in fs to make a backup.")?
{
backup(crate::path::MAP_PATH.clone())?;
}
OpenOptions::new()
.write(true)
.create(true)
.truncate(true) // Wipe old map, in case we skip over backing up the old one.
.open(crate::path::MAP_PATH.clone())
.with_context(|| {
format!(
"Failed to open map file at: {}",
crate::path::MAP_PATH.display()
)
})
}
fn backup<P: AsRef<Path>>(path: P) -> std::io::Result<()> {
std::fs::rename(
&path,
format!("{}.bak", path.as_ref().to_path_buf().display()),
)
}
#[derive(Debug)]
#[repr(C, align(512))]
pub struct DirectIOBuffer(pub [u8; crate::MAX_BUFFER_SIZE]);
impl DirectIOBuffer {
pub fn new() -> Self {
Self::default()
}
}
impl Default for DirectIOBuffer {
fn default() -> Self {
Self([crate::FB_NULL_VALUE; crate::MAX_BUFFER_SIZE])
}
}
impl From<[u8; crate::MAX_BUFFER_SIZE]> for DirectIOBuffer {
fn from(value: [u8; crate::MAX_BUFFER_SIZE]) -> Self {
Self(value)
}
}
impl TryFrom<&[u8]> for DirectIOBuffer {
type Error = anyhow::Error;
fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
if value.len() > crate::MAX_BUFFER_SIZE {
return Err(anyhow!("Provided slice is larger than MAX_BUFFER_SIZE."));
}
Ok(Self(value.try_into()?))
}
}
impl AsRef<[u8]> for DirectIOBuffer {
fn as_ref(&self) -> &[u8] {
&self.0
}
}
impl AsMut<[u8]> for DirectIOBuffer {
fn as_mut(&mut self) -> &mut [u8] {
&mut self.0
}
}
impl<Idx> Index<Idx> for DirectIOBuffer
where
Idx: std::slice::SliceIndex<[u8], Output = [u8]>,
{
type Output = Idx::Output;
fn index(&self, index: Idx) -> &Self::Output {
&self.0.as_slice()[index]
}
}


@@ -1,22 +1,176 @@
mod cli;
mod io;
mod mapping;
mod path;
mod recovery;
mod mapping;
use clap::Parser;
use libc::O_DIRECT;
use mapping::MapFile;
use recovery::Recover;
use std::{
fs::{File, OpenOptions},
io::{self, Seek, SeekFrom},
os::unix::fs::OpenOptionsExt,
path::PathBuf,
};
use anyhow;
const FB_SECTOR_SIZE: usize = 2048;
const FB_CLUSTER_LEN: usize = 128;
const FB_NULL_VALUE: u8 = 0;
const FB_SECTOR_SIZE: u16 = 2048;
const MAX_BUFFER_SIZE: usize = FB_SECTOR_SIZE * FB_CLUSTER_LEN;
fn main() -> anyhow::Result<()> {
let mut recover_tool = Recover::new()?;
recover_tool.run()?;
#[derive(Parser, Debug)]
struct Args {
/// Path to source file or block device
#[arg(short, long, value_hint = clap::ValueHint::DirPath)]
input: PathBuf,
Ok(())
/// Path to output file. Defaults to {input}.iso
#[arg(short, long, value_hint = clap::ValueHint::DirPath)]
output: Option<PathBuf>,
/// Path to rescue map. Defaults to {input}.map
#[arg(short, long, value_hint = clap::ValueHint::DirPath)]
map: Option<PathBuf>,
/// Max number of consecutive sectors to test as a group
#[arg(short, long, default_value_t = 128)]
cluster_length: u16,
/// Number of brute force read passes
#[arg(short, long, default_value_t = 2)]
brute_passes: usize,
/// Sector size
#[arg(short, long, default_value_t = FB_SECTOR_SIZE)]
sector_size: u16,
}
fn main() {
let config = Args::parse();
// Live with it, prefer to use expect() here.
// I'm lazy and don't want to mess around with comparing error types.
// Thus, any error in I/O here should be treated as fatal.
let mut input: File = {
match OpenOptions::new()
.custom_flags(O_DIRECT)
.read(true)
.write(false)
.append(false)
.create(false)
.open(&config.input.as_path())
{
Ok(f) => f,
Err(err) => panic!("Failed to open input file: {:?}", err)
}
};
let mut output: File = {
// Keep this clean, make a short-lived binding.
let path = get_path(
&config.output,
&config.input.to_str().unwrap(),
"iso"
);
match OpenOptions::new()
.custom_flags(O_DIRECT)
.read(true)
.write(true)
.create(true)
.open(path)
{
Ok(f) => f,
Err(err) => panic!("Failed to open/create output file. {:?}", err)
}
};
// Check if output file is shorter than input.
// If so, autoextend the output file.
{
let input_len = get_stream_length(&mut input)
.expect("Failed to get the length of the input data.");
let output_len = get_stream_length(&mut output)
.expect("Failed to get the length of the output file.");
if output_len < input_len {
output.set_len(input_len)
.expect("Failed to autofill output file.")
}
}
let map: MapFile = {
let path = get_path(
&config.output,
&config.input.to_str().unwrap(),
"map"
);
let file = match OpenOptions::new()
.read(true)
.create(true)
.open(path)
{
Ok(f) => f,
Err(err) => panic!("Failed to open/create mapping file. {:?}", err)
};
if let Ok(map) = MapFile::try_from(file) {
map
} else {
MapFile::new(config.sector_size)
}
};
let mut recover_tool = Recover::new(config, input, output, map);
recover_tool.run();
todo!("Recovery, Map saving, and closure of all files.");
}
/// Generates a file path if one is not provided,
/// using source_name as the fallback name.
fn get_path(
output: &Option<PathBuf>,
source_name: &str,
extension: &str
) -> PathBuf {
if let Some(f) = output {
f.to_owned()
} else {
// Use `{}` (not `{:?}`) so the name is not wrapped in quotes.
PathBuf::from(format!(
"{}.{}",
source_name,
extension,
))
}
}
/// Get length of data stream.
/// Physical length of data stream in bytes
/// (multiple of sector_size, rather than actual).
fn get_stream_length<S: Seek>(input: &mut S) -> io::Result<u64> {
let len = input.seek(SeekFrom::End(0))?;
let _ = input.seek(SeekFrom::Start(0));
Ok(len)
}
#[cfg(test)]
#[allow(unused)]
mod tests {
use super::*;
// Test for get_path
// Need to determine how to package files to test with, or at least
// how to test with PathBuf present.
// Test must also check unwrapping of file name, not just generation.
// Test for get_stream_length
// Need to determine how to test with Seek-able objects.
}

src/mapping.rs

@@ -0,0 +1,446 @@
use ron::de::{from_reader, SpannedError};
use serde::Deserialize;
use std::fs::File;
use crate::FB_SECTOR_SIZE;
/// Domain, in sectors.
/// Requires sector_size to be provided elsewhere for conversion to bytes.
#[derive(Clone, Copy, Debug, Deserialize, PartialEq)]
pub struct Domain {
pub start: usize,
pub end: usize,
}
impl Default for Domain {
fn default() -> Self {
Domain { start: 0, end: 1 }
}
}
impl Domain {
/// Return length of domain in sectors.
pub fn len(self) -> usize {
self.end - self.start
}
}
/// A map for data stored in memory for processing and saving to disk.
#[derive(Clone, Copy, Debug, Deserialize, PartialEq)]
pub struct Cluster {
domain: Domain,
stage: Stage,
}
impl Default for Cluster {
fn default() -> Self {
Cluster {
domain: Domain::default(),
stage: Stage::default()
}
}
}
impl Cluster {
/// Breaks apart into a vec of clusters,
/// each of cluster_size, excepting last.
pub fn subdivide(&mut self, cluster_len: usize) -> Vec<Cluster> {
let domain_len = self.domain.len();
let mut start = self.domain.start;
let mut clusters: Vec<Cluster> = vec![];
for _ in 0..(domain_len as f64 / cluster_len as f64).floor() as usize {
clusters.push(Cluster {
domain: Domain {
start,
end: start + cluster_len,
},
stage: self.stage,
});
start += cluster_len;
}
clusters.push(Cluster {
domain: Domain {
start,
end: self.domain.end,
},
stage: self.stage,
});
clusters
}
pub fn set_stage(&mut self, stage: Stage) -> &mut Self {
self.stage = stage;
self
}
}
#[derive(Clone, Copy, Debug, Deserialize, PartialEq, PartialOrd)]
pub enum Stage {
Untested,
ForIsolation(u8),
Damaged,
}
impl Default for Stage {
fn default() -> Self {
Stage::Untested
}
}
#[derive(Clone, Debug, Deserialize, PartialEq)]
pub struct MapFile {
pub sector_size: u16,
pub domain: Domain,
pub map: Vec<Cluster>,
}
impl TryFrom<File> for MapFile {
type Error = SpannedError;
fn try_from(file: File) -> Result<Self, Self::Error> {
from_reader(file)
}
}
impl Default for MapFile {
fn default() -> Self {
MapFile {
sector_size: FB_SECTOR_SIZE,
domain: Domain::default(),
map: vec![Cluster {
domain: Domain::default(),
stage: Stage::Untested,
}],
}
}
}
impl MapFile {
pub fn new(sector_size: u16) -> Self {
MapFile::default()
.set_sector_size(sector_size)
.to_owned()
}
pub fn set_sector_size(&mut self, sector_size: u16) -> &mut Self {
self.sector_size = sector_size;
self
}
/// Recalculate cluster mappings.
fn update(&mut self, new_cluster: Cluster) -> &mut Self {
let mut new_map: Vec<Cluster> = vec![Cluster::from(new_cluster.to_owned())];
for map_cluster in self.map.iter() {
let mut map_cluster = *map_cluster;
// If new_cluster doesn't start ahead and ends short, map_cluster is forgotten.
if new_cluster.domain.start < map_cluster.domain.start
&& new_cluster.domain.end < map_cluster.domain.end {
/*
new_cluster overlaps the start of map_cluster,
but ends short of map_cluster end.
ACTION: Crop map_cluster to start at end of new_cluster.
*/
map_cluster.domain.start = new_cluster.domain.end;
new_map.push(map_cluster);
} else if new_cluster.domain.end < map_cluster.domain.end {
/*
new_cluster starts within map_cluster domain.
ACTION: Crop
*/
let domain_end = map_cluster.domain.end;
// Crop current object.
map_cluster.domain.end = new_cluster.domain.start;
new_map.push(map_cluster);
if new_cluster.domain.end < map_cluster.domain.end {
/*
new_cluster is within map_cluster.
ACTION: Crop & Fracture map_cluster
NOTE: Crop completed above.
*/
new_map.push(Cluster {
domain: Domain {
start: new_cluster.domain.end,
end: domain_end,
},
stage: map_cluster.stage.to_owned()
});
}
} else {
/*
No overlap.
ACTION: Transfer
*/
new_map.push(map_cluster);
}
}
self.map = new_map;
self
}
/// Get current recovery stage.
pub fn get_stage(&self) -> Stage {
let mut recover_stage = Stage::Damaged;
for cluster in self.map.iter() {
match cluster.stage {
Stage::Untested => return Stage::Untested,
Stage::ForIsolation(_) => {
if recover_stage == Stage::Damaged
|| cluster.stage < recover_stage {
// Note that recover_stage after first condition is
// only ever Stage::ForIsolation(_), thus PartialEq,
// PartialOrd are useful for comparing the internal value.
recover_stage = cluster.stage
}
},
Stage::Damaged => (),
}
}
recover_stage
}
/// Get clusters of common stage.
pub fn get_clusters(&self, stage: Stage) -> Vec<Cluster> {
self.map.iter()
.filter_map(|mc| {
if mc.stage == stage { Some(mc.to_owned()) } else { None }
})
.collect()
}
/// Defragments cluster groups.
/// I.E. check forwards every cluster from current until stage changes,
/// then group at once.
fn defrag(&mut self) -> &mut Self {
let mut new_map: Vec<Cluster> = vec![];
// Fetch first cluster.
let mut start_cluster = self.map.iter()
.find(|c| c.domain.start == 0)
.unwrap();
// Even though this would be initialized by its first read,
// the compiler won't stop whining, and idk how to assert that to it.
let mut end_cluster = Cluster::default();
let mut new_cluster: Cluster;
let mut stage_common: bool;
let mut is_finished = false;
while !is_finished {
stage_common = true;
// Start a new cluster based on the cluster following
// the end of last new_cluster.
new_cluster = start_cluster.to_owned();
// While stage is common, and not finished,
// find each trailing cluster.
while stage_common && !is_finished {
end_cluster = start_cluster.to_owned();
if end_cluster.domain.end != self.domain.end {
start_cluster = self.map.iter()
.find(|c| end_cluster.domain.end == c.domain.start)
.unwrap();
stage_common = new_cluster.stage == start_cluster.stage
} else {
is_finished = true;
}
}
// Set the new ending, encapsulating any clusters of common stage.
new_cluster.domain.end = end_cluster.domain.end;
new_map.push(new_cluster);
}
self.map = new_map;
self
}
}
#[cfg(test)]
mod tests {
use super::*;
// Test for Cluster::subdivide()
// Test for MapFile::update()
// Test for MapFile::get_stage()
#[test]
fn test_get_stage() {
use std::vec;
let mut mf = MapFile::default();
let mut mf_stage = mf.get_stage();
// If this fails here, there's something SERIOUSLY wrong.
assert!(
mf_stage == Stage::Untested,
"Determined stage to be {:?}, when {:?} was expected.",
mf_stage, Stage::Untested
);
let stages = vec![
Stage::Damaged,
Stage::ForIsolation(1),
Stage::ForIsolation(0),
Stage::Untested,
];
mf.map = vec![];
for stage in stages {
mf.map.push(*Cluster::default().set_stage(stage));
mf_stage = mf.get_stage();
assert!(
stage == mf_stage,
"Expected stage to be {:?}, determined {:?} instead.",
stage, mf_stage
)
}
}
// Test for MapFile::get_clusters()
#[test]
fn test_get_clusters() {
let mut mf = MapFile::default();
mf.map = vec![
*Cluster::default().set_stage(Stage::Damaged),
*Cluster::default().set_stage(Stage::ForIsolation(0)),
*Cluster::default().set_stage(Stage::ForIsolation(1)),
Cluster::default(),
Cluster::default(),
*Cluster::default().set_stage(Stage::ForIsolation(1)),
*Cluster::default().set_stage(Stage::ForIsolation(0)),
*Cluster::default().set_stage(Stage::Damaged),
];
let stages = vec![
Stage::Damaged,
Stage::ForIsolation(1),
Stage::ForIsolation(0),
Stage::Untested,
];
for stage in stages {
let expected = vec![
*Cluster::default().set_stage(stage),
*Cluster::default().set_stage(stage),
];
let received = mf.get_clusters(stage);
assert!(
expected == received,
"Expected clusters {:?}, got {:?}.",
expected, received
)
}
}
// Test for MapFile::defrag()
#[test]
fn test_defrag() {
let mut mf = MapFile {
sector_size: 1,
domain: Domain { start: 0, end: 8 },
map: vec![
Cluster {
domain: Domain { start: 0, end: 1 },
stage: Stage::Untested,
},
Cluster {
domain: Domain { start: 1, end: 2 },
stage: Stage::Untested,
},
Cluster {
domain: Domain { start: 2, end: 3 },
stage: Stage::Untested,
},
Cluster {
domain: Domain { start: 3, end: 4 },
stage: Stage::ForIsolation(0),
},
Cluster {
domain: Domain { start: 4, end: 5 },
stage: Stage::ForIsolation(0),
},
Cluster {
domain: Domain { start: 5, end: 6 },
stage: Stage::ForIsolation(1),
},
Cluster {
domain: Domain { start: 6, end: 7 },
stage: Stage::ForIsolation(0),
},
Cluster {
domain: Domain { start: 7, end: 8 },
stage: Stage::Damaged,
},
],
};
let expected = vec![
Cluster {
domain: Domain { start: 0, end: 3 },
stage: Stage::Untested,
},
Cluster {
domain: Domain { start: 3, end: 5 },
stage: Stage::ForIsolation(0),
},
Cluster {
domain: Domain { start: 5, end: 6 },
stage: Stage::ForIsolation(1),
},
Cluster {
domain: Domain { start: 6, end: 7 },
stage: Stage::ForIsolation(0),
},
Cluster {
domain: Domain { start: 7, end: 8 },
stage: Stage::Damaged,
},
];
mf.defrag();
let received = mf.map;
assert!(
expected == received,
"Expected {:?} after defragging, got {:?}.",
expected, received
)
}
}


@@ -1,68 +0,0 @@
use super::{Domain, Stage};
use serde::{Deserialize, Serialize};
/// A map for data stored in memory for processing and saving to disk.
// derived Ord impl *should* use self.domain.start to sort? Not sure.
// Use `sort_by_key()` to be safe.
#[derive(Clone, Copy, Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct Cluster {
pub domain: Domain,
pub stage: Stage,
}
impl Default for Cluster {
fn default() -> Self {
Cluster {
domain: Domain::default(),
stage: Stage::default(),
}
}
}
impl Cluster {
/// Breaks apart into a vec of clusters,
/// each of cluster_size, excepting last.
#[allow(dead_code)]
pub fn subdivide(&mut self, cluster_len: usize) -> Vec<Cluster> {
let domain_len = self.domain.len();
let mut start = self.domain.start;
let mut clusters: Vec<Cluster> = vec![];
for _ in 0..(domain_len / cluster_len) {
clusters.push(Cluster {
domain: Domain {
start,
end: start + cluster_len,
},
stage: self.stage,
});
start += cluster_len;
}
clusters.push(Cluster {
domain: Domain {
start,
end: self.domain.end,
},
stage: self.stage,
});
clusters
}
// This is used in unit tests at present. Ideally it probably shouldn't exist.
#[allow(dead_code)]
pub fn set_stage(&mut self, stage: Stage) -> &mut Self {
self.stage = stage;
self
}
}
#[cfg(test)]
mod tests {
use super::*;
// Test for Cluster::subdivide()
}


@@ -1,56 +0,0 @@
use serde::{Deserialize, Serialize};
/// Domain, in sectors.
/// Requires sector_size to be provided elsewhere for conversion to bytes.
#[derive(Clone, Copy, Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct Domain {
pub start: usize,
pub end: usize,
}
impl Default for Domain {
fn default() -> Self {
Domain { start: 0, end: 1 }
}
}
impl Domain {
/// Return length of domain in sectors.
#[allow(dead_code)]
pub fn len(self) -> usize {
self.end - self.start
}
/// Returns the type of overlap between this domain and another.
pub fn overlap(&self, other: Domain) -> DomainOverlap {
if self.end <= other.start || other.end <= self.start {
// Cases 7, 8, 12, and 13 of map::tests::test_update
DomainOverlap::None
} else if other.start >= self.start && other.end <= self.end {
// Cases 3, 5, 9, and 11 of map::tests::test_update
DomainOverlap::SelfEngulfsOther
} else if other.start <= self.start && other.end >= self.end {
// Cases 4, 6, and 10 of map::tests::test_update
DomainOverlap::OtherEngulfsSelf
} else if self.start < other.start {
// Case 1 of map::tests::test_update
DomainOverlap::OtherOverlapsEnd
} else {
// Case 2 of map::tests::test_update
DomainOverlap::OtherOverlapsStart
}
}
}
pub enum DomainOverlap {
None,
SelfEngulfsOther,
OtherEngulfsSelf,
OtherOverlapsStart,
OtherOverlapsEnd,
}
#[cfg(test)]
mod tests {
use super::*;
}


@@ -1,859 +0,0 @@
use std::fs::File;
use std::io::Write;
use crate::mapping::cluster;
use super::{Cluster, Domain, DomainOverlap, Stage};
use anyhow;
use ron::de::from_reader;
use ron::error::SpannedError;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq)]
pub struct MapFile {
pub sector_size: usize,
pub domain: Domain,
pub map: Vec<Cluster>,
}
impl TryFrom<File> for MapFile {
type Error = SpannedError;
fn try_from(file: File) -> Result<Self, Self::Error> {
from_reader(file)
}
}
impl Default for MapFile {
fn default() -> Self {
MapFile {
sector_size: crate::FB_SECTOR_SIZE,
domain: Domain::default(),
map: vec![Cluster {
domain: Domain::default(),
stage: Stage::Patchwork { depth: 0 },
}],
}
}
}
impl MapFile {
pub fn new(sector_size: usize) -> Self {
MapFile::default().set_sector_size(sector_size).to_owned()
}
pub fn set_sector_size(&mut self, sector_size: usize) -> &mut Self {
self.sector_size = sector_size;
self
}
/// Recalculate cluster mappings.
pub fn update(&mut self, new: Cluster) -> &mut Self {
let mut map: Vec<Cluster> = vec![Cluster::from(new.clone())];
for old in self.map.iter() {
let mut old = *old;
match new.domain.overlap(old.domain) {
DomainOverlap::None => map.push(old),
DomainOverlap::SelfEngulfsOther => (),
DomainOverlap::OtherEngulfsSelf => {
other_engulfs_self_update(new, &mut old, &mut map)
}
DomainOverlap::OtherOverlapsEnd => {
// Case 1
old.domain.start = new.domain.end;
map.push(old);
}
DomainOverlap::OtherOverlapsStart => {
// Case 2
old.domain.end = new.domain.start;
map.push(old);
}
};
}
self.map = map;
self
}
/// Get current recovery stage.
pub fn get_stage(&self) -> Stage {
let mut recover_stage = Stage::Damaged;
for cluster in self.map.iter() {
if cluster.stage < recover_stage {
recover_stage = cluster.stage;
}
}
recover_stage
}
/// Get clusters of common stage.
pub fn get_clusters(&self, stage: Stage) -> Vec<Cluster> {
self.map
.iter()
.filter_map(|mc| {
if mc.stage == stage {
Some(mc.to_owned())
} else {
None
}
})
.collect()
}
/// Defragments cluster groups.
/// I.E. check forwards every cluster from current until stage changes,
/// then group at once.
pub fn defrag(&mut self) {
self.map.sort_by_key(|c| c.domain.start);
let mut new_map: Vec<Cluster> = vec![];
let mut idx = 0;
let mut master;
while idx < self.map.len() - 1 {
master = self.map[idx];
for c in self.map[idx + 1..self.map.len()].into_iter() {
if c.stage != master.stage {
break;
}
idx += 1;
}
master.domain.end = self.map[idx].domain.end;
new_map.push(master);
idx += 1;
}
self.map = new_map;
}
/// Extend the domain of the MapFile.
/// Returns None if the domain cannot be changed or is unchanged.
/// Returns the delta of the previous domain end and the new end.
pub fn extend(&mut self, end: usize) -> Option<usize> {
if end <= self.domain.end {
return None;
}
let old_end = self.domain.end;
let delta = end - old_end;
self.domain.end = end;
// Add new data as untested.
self.update(Cluster {
domain: Domain {
start: old_end,
end: self.domain.end,
},
..Default::default()
});
Some(delta)
}
/// Writes the map to the provided item implementing `Write` trait.
/// Usually a file.
pub fn write_to<W: Write>(&mut self, file: &mut W) -> anyhow::Result<usize> {
self.defrag();
let written_bytes = file.write(
ron::ser::to_string_pretty(
self,
ron::ser::PrettyConfig::new()
.new_line("\n".to_string())
.struct_names(true),
)?
.as_bytes(),
)?;
Ok(written_bytes)
}
}
// This is split out for a shred of readability.
fn other_engulfs_self_update(new: Cluster, old: &mut Cluster, map: &mut Vec<Cluster>) {
if new.domain.start == old.domain.start {
// Case 6 of map::tests::test_update
old.domain.start = new.domain.end;
} else {
// Case 4 and part of 10
let old_end = old.domain.end;
old.domain.end = new.domain.start;
if new.domain.end != old_end {
// Case 10 of map::tests::test_update
map.push(Cluster {
domain: Domain {
start: new.domain.end,
end: old_end,
},
stage: old.stage,
})
}
}
map.push(old.to_owned())
}
#[cfg(test)]
mod tests {
use super::*;
/// Test for MapFile::update()
#[test]
fn update_1_new_overlaps_start() {
// Case 1:
// |----new----|
// |----old----|
//
// | --> |-old-|
// Solution: old.start = new.end
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 1, end: 3 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
},
Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::update()
#[test]
fn update_2_new_overlaps_end() {
// Case 2:
// |----new----|
// |----old----|
//
// |-old-| <-- |
// Solution: old.end = new.start
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 1, end: 3 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 1 },
..Default::default()
},
Cluster {
domain: Domain { start: 1, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::update()
#[test]
fn update_3_new_engulfs_common_end() {
// Case 3:
// |----new----|
// |--old--|
//
// Solution: Remove old.
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 1, end: 3 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
}]
);
}
/// Test for MapFile::update()
#[test]
fn update_4_old_engulfs_common_end() {
// Case 4:
// |--new--|
// |-----old-----|
//
// |-old-| <---- |
// Solution: old.end = new.start
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 1, end: 3 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 1 },
..Default::default()
},
Cluster {
domain: Domain { start: 1, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::update()
#[test]
fn update_5_new_engulfs_common_start() {
// Case 5:
// |-----new----|
// |--old--|
//
// Solution: Remove old.
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
}]
);
}
/// Test for MapFile::update()
#[test]
fn update_6_old_engulfs_common_start() {
// Case 6:
// |--new--|
// |-----old-----|
//
// | ----> |-old-|
// Solution: old.start = new.end
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
},
Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::update()
#[test]
fn update_7_new_precedes() {
// Case 7:
// |--new--|
// |--old--|
//
// Solution: Leave unchanged.
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
},
Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::update()
#[test]
fn update_8_new_trails() {
// Case 8:
// |--new--|
// |--old--|
// Solution: Leave unchanged.
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 2 },
..Default::default()
},
Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::update()
#[test]
fn update_9_new_engulfs() {
// Case 9:
// |-----new-----|
// |--old--|
//
// Solution: Remove old.
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 1, end: 2 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
}]
);
}
/// Test for MapFile::update()
#[test]
fn update_10_old_engulfs() {
// Case 10:
// |--new--|
// |--------------old--------------|
//
// |----old----| <---- |
// + |--fracture-|
// Solution: old.end = new.start
// && fracture:
// with fracture.start = new.end
// && fracture.end = old.original_end
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 0, end: 3 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 1, end: 2 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 1 },
..Default::default()
},
Cluster {
domain: Domain { start: 1, end: 2 },
..Default::default()
},
Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::update()
#[test]
fn update_11_common_start_and_end() {
// Case 11:
// |--new--|
// |--old--|
//
// Solution: Remove old.
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 0, end: 3 },
stage: Stage::Patchwork { depth: 0 },
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 0, end: 3 },
stage: Stage::Intact,
});
map.map.sort();
assert_eq!(
map.map,
vec![Cluster {
domain: Domain { start: 0, end: 3 },
stage: Stage::Intact
}]
);
}
/// Test for MapFile::update()
#[test]
fn update_12_new_out_of_range_preceding() {
// Case 12:
// |--new--|
// |--old--|
//
// Solution: Leave Unchanged.
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 0, end: 1 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 1 },
..Default::default()
},
Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::update()
#[test]
fn update_13_new_out_of_range_trailing() {
// Case 13:
// |--new--|
// |--old--|
//
// Solution: Leave Unchanged.
let mut map = MapFile {
map: vec![Cluster {
domain: Domain { start: 0, end: 1 },
..Default::default()
}],
..Default::default()
};
map.update(Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
});
map.map.sort();
assert_eq!(
map.map,
vec![
Cluster {
domain: Domain { start: 0, end: 1 },
..Default::default()
},
Cluster {
domain: Domain { start: 2, end: 3 },
..Default::default()
}
]
);
}
/// Test for MapFile::get_stage()
#[test]
fn get_stage() {
let mut mf = MapFile::default();
let mut mf_stage = mf.get_stage();
// If this fails here, there's something SERIOUSLY wrong.
assert!(
mf_stage == Stage::Patchwork { depth: 0 },
"Determined stage to be {:?}, when {:?} was expected.",
mf_stage,
Stage::Patchwork { depth: 0 }
);
let stages = vec![
Stage::Damaged,
Stage::Patchwork { depth: 1 },
Stage::Patchwork { depth: 0 },
];
mf.map = vec![];
for stage in stages {
mf.map.push(*Cluster::default().set_stage(stage));
mf_stage = mf.get_stage();
assert!(
stage == mf_stage,
"Expected stage to be {:?}, determined {:?} instead.",
stage,
mf_stage
)
}
}
/// Test for MapFile::get_clusters()
#[test]
fn get_clusters() {
let mut mf = MapFile::default();
mf.map = vec![
*Cluster::default().set_stage(Stage::Damaged),
*Cluster::default().set_stage(Stage::Patchwork { depth: 1 }),
Cluster::default(),
Cluster::default(),
*Cluster::default().set_stage(Stage::Patchwork { depth: 1 }),
*Cluster::default().set_stage(Stage::Damaged),
];
let stages = vec![
Stage::Damaged,
Stage::Patchwork { depth: 1 },
Stage::Patchwork { depth: 0 },
];
for stage in stages {
let expected = vec![
*Cluster::default().set_stage(stage),
*Cluster::default().set_stage(stage),
];
let received = mf.get_clusters(stage);
assert!(
expected == received,
"Expected clusters {:?}, got {:?}.",
expected,
received
)
}
}
/// Test for MapFile::defrag()
#[test]
fn defrag() {
let mut mf = MapFile {
sector_size: 1,
domain: Domain { start: 0, end: 8 },
map: vec![
Cluster {
domain: Domain { start: 0, end: 1 },
stage: Stage::Patchwork { depth: 0 },
},
Cluster {
domain: Domain { start: 1, end: 2 },
stage: Stage::Patchwork { depth: 0 },
},
Cluster {
domain: Domain { start: 2, end: 3 },
stage: Stage::Patchwork { depth: 0 },
},
Cluster {
domain: Domain { start: 3, end: 4 },
stage: Stage::Isolate,
},
Cluster {
domain: Domain { start: 4, end: 5 },
stage: Stage::Isolate,
},
Cluster {
domain: Domain { start: 5, end: 6 },
stage: Stage::Patchwork { depth: 1 },
},
Cluster {
domain: Domain { start: 6, end: 7 },
stage: Stage::Patchwork { depth: 0 },
},
Cluster {
domain: Domain { start: 7, end: 8 },
stage: Stage::Damaged,
},
Cluster {
domain: Domain { start: 8, end: 10 },
stage: Stage::Intact,
},
Cluster {
domain: Domain { start: 10, end: 11 },
stage: Stage::BruteForceAndDesperation,
},
Cluster {
domain: Domain { start: 11, end: 12 },
stage: Stage::BruteForceAndDesperation,
},
],
};
let expected = vec![
Cluster {
domain: Domain { start: 0, end: 3 },
stage: Stage::Patchwork { depth: 0 },
},
Cluster {
domain: Domain { start: 3, end: 5 },
stage: Stage::Isolate,
},
Cluster {
domain: Domain { start: 5, end: 6 },
stage: Stage::Patchwork { depth: 1 },
},
Cluster {
domain: Domain { start: 6, end: 7 },
stage: Stage::Patchwork { depth: 0 },
},
Cluster {
domain: Domain { start: 7, end: 8 },
stage: Stage::Damaged,
},
Cluster {
domain: Domain { start: 8, end: 10 },
stage: Stage::Intact,
},
Cluster {
domain: Domain { start: 10, end: 12 },
stage: Stage::BruteForceAndDesperation,
},
];
mf.defrag();
mf.map.sort_by_key(|c| c.domain.start);
let received = mf.map;
assert!(
expected == received,
"Expected {:?} after defragging, got {:?}.",
expected,
received
)
}
}
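The `defrag` test above expects adjacent clusters that share a stage to collapse into a single cluster spanning their combined domain. A minimal standalone sketch of that merge pass — the `Cluster`/`Domain` types here are simplified stand-ins, not the crate's, and the stage is reduced to a plain `u8` tag:

```rust
// Simplified stand-ins for the crate's Cluster/Domain types.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Domain { start: usize, end: usize }

#[derive(Clone, Copy, Debug, PartialEq)]
struct Cluster { domain: Domain, stage: u8 }

// Merge runs of adjacent clusters that share a stage and abut exactly
// (previous end == next start), matching what the defrag test expects.
// Assumes the input is already sorted by domain start.
fn defrag(map: &[Cluster]) -> Vec<Cluster> {
    let mut out: Vec<Cluster> = Vec::new();
    for c in map {
        match out.last_mut() {
            Some(prev) if prev.stage == c.stage && prev.domain.end == c.domain.start => {
                // Same stage and contiguous: extend the previous run.
                prev.domain.end = c.domain.end;
            }
            _ => out.push(*c),
        }
    }
    out
}

fn main() {
    let map = [
        Cluster { domain: Domain { start: 0, end: 1 }, stage: 0 },
        Cluster { domain: Domain { start: 1, end: 2 }, stage: 0 },
        Cluster { domain: Domain { start: 2, end: 3 }, stage: 1 },
    ];
    let merged = defrag(&map);
    assert_eq!(merged.len(), 2);
    assert_eq!(merged[0].domain, Domain { start: 0, end: 2 });
}
```

Note that non-contiguous clusters of the same stage (as in the `Intact` gap in the test fixture) are intentionally left unmerged.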

View File

@@ -1,12 +0,0 @@
#![allow(unused_imports)]
pub mod cluster;
pub mod domain;
pub mod map;
pub mod prelude;
pub mod stage;
pub use cluster::Cluster;
pub use domain::{Domain, DomainOverlap};
pub use map::MapFile;
pub use stage::Stage;

View File

@@ -1,6 +0,0 @@
#![allow(unused_imports)]
pub use super::cluster::Cluster;
pub use super::domain::{Domain, DomainOverlap};
pub use super::map::MapFile;
pub use super::stage::Stage;

View File

@@ -1,22 +0,0 @@
use serde::{Deserialize, Serialize};
#[derive(Clone, Copy, Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum Stage {
    // Don't mess with the order: the derived `Ord` compares variants by
    // declaration order, and the recovery logic relies on it.
Patchwork { depth: usize },
Isolate,
BruteForceAndDesperation,
Damaged,
Intact,
}
impl Default for Stage {
fn default() -> Self {
Stage::Patchwork { depth: 0 }
}
}
#[cfg(test)]
mod tests {
use super::*;
}
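The ordering warning on `Stage` exists because `#[derive(PartialOrd, Ord)]` on an enum orders variants by declaration order, with fields breaking ties within a variant. A quick standalone demonstration (a local copy of the enum, not the crate's):

```rust
// Derived Ord on an enum: earlier variants compare as smaller,
// regardless of field values; fields only matter within a variant.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Stage {
    Patchwork { depth: usize },
    Isolate,
    BruteForceAndDesperation,
    Damaged,
    Intact,
}

fn main() {
    // Variant order dominates: even a deep Patchwork sorts before Isolate.
    assert!(Stage::Patchwork { depth: 5 } < Stage::Isolate);
    assert!(Stage::Isolate < Stage::BruteForceAndDesperation);
    assert!(Stage::Damaged < Stage::Intact);
    // Within the same variant, the field decides.
    assert!(Stage::Patchwork { depth: 0 } < Stage::Patchwork { depth: 1 });
}
```

This is presumably why `get_stage` can treat the minimum stage in the map as the overall recovery stage: reordering the variants would silently change that result.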

View File

@@ -1,40 +0,0 @@
use std::path::{Path, PathBuf};
use std::sync::LazyLock;
use crate::cli::CONFIG;
use anyhow::{self, Context};
/// Generates a file path if one not provided.
/// root_path for fallback name.
pub fn get_path<P>(path: &Option<P>, root_path: &P, extension: &str) -> anyhow::Result<PathBuf>
where
P: AsRef<Path>,
{
if let Some(f) = path {
return Ok(f.as_ref().to_path_buf());
}
Ok(PathBuf::from(format!(
"{}.{}",
root_path
.as_ref()
.to_str()
            .context("source_name path was not valid UTF-8.")?,
extension
))
.as_path()
.to_owned())
}
pub static MAP_PATH: LazyLock<PathBuf> = LazyLock::new(|| {
get_path(&CONFIG.map, &CONFIG.input, "map")
.context("Failed to generate map path.")
.unwrap()
});
pub static OUTPUT_PATH: LazyLock<PathBuf> = LazyLock::new(|| {
get_path(&CONFIG.output, &CONFIG.input, "iso")
.context("Failed to generate output path.")
.unwrap()
});
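The fallback behavior of `get_path` is worth spelling out: when no explicit path is given, the extension is appended to the *full* root path string, so `disk.img` becomes `disk.img.map`, not `disk.map`. A simplified standalone sketch of that logic (using `Option` instead of `anyhow` for brevity; the file names are made up for illustration):

```rust
use std::path::{Path, PathBuf};

// Simplified version of get_path's fallback logic: an explicit path
// wins; otherwise append the extension to the root path as a string.
fn get_path<P: AsRef<Path>>(path: &Option<P>, root: &P, ext: &str) -> Option<PathBuf> {
    if let Some(p) = path {
        return Some(p.as_ref().to_path_buf());
    }
    // to_str() fails (returns None) if the path is not valid UTF-8.
    Some(PathBuf::from(format!("{}.{}", root.as_ref().to_str()?, ext)))
}

fn main() {
    // Explicit path wins over the derived fallback.
    let explicit = get_path(&Some("custom.map"), &"disk.img", "map").unwrap();
    assert_eq!(explicit, PathBuf::from("custom.map"));

    // No explicit path: extension is appended to the whole root name.
    let none: Option<&str> = None;
    let derived = get_path(&none, &"disk.img", "map").unwrap();
    assert_eq!(derived, PathBuf::from("disk.img.map"));
}
```

The real implementation returns `anyhow::Result` and attaches a context message when the root path is not valid UTF-8, but the path derivation is the same.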

View File

@@ -1,199 +1,110 @@
use std::fs::File;
use std::io::{BufWriter, Read, Seek, SeekFrom, Write};
use std::usize;
use std::{
io::{BufReader, BufWriter},
fs::File,
};
use anyhow::Context;
use crate::{
Args,
mapping::{Cluster, MapFile, Stage},
};
use crate::cli::CONFIG;
use crate::io::DirectIOBuffer;
use crate::mapping::prelude::*;
#[derive(Debug)]
pub struct Recover {
input: File,
buf_capacity: usize,
config: Args,
input: BufReader<File>,
output: BufWriter<File>,
map: MapFile,
stage: Stage,
}
impl Recover {
pub fn new() -> anyhow::Result<Self> {
let input: File = crate::io::load_input()?;
let output: File = crate::io::load_output()?;
let map: MapFile = {
if let Ok(f) = crate::io::load_map_read()
&& let Ok(map_file) = MapFile::try_from(f)
{
map_file
} else {
MapFile::new(CONFIG.sector_size)
}
};
pub fn new(
config: Args,
input: File,
output: File,
map: MapFile,
) -> Self {
let stage = map.get_stage();
// Temporarily make buffer length one sector.
let buf_capacity = config.sector_size as usize;
let mut r = Recover {
input,
output: BufWriter::with_capacity(map.domain.end as usize, output),
buf_capacity,
config,
input: BufReader::with_capacity(
buf_capacity,
input,
),
output: BufWriter::with_capacity(
buf_capacity,
output,
),
map,
stage: stage,
};
r.restore()?;
Ok(r)
// Ensure that buffer capacity is adjusted based on progress.
r.set_buf_capacity();
r
}
/// Recover media.
pub fn run(&mut self) -> anyhow::Result<usize> {
pub fn run(&mut self) -> &mut Self {
let mut is_finished = false;
while !is_finished {
self.map.defrag();
match self.map.get_stage() {
Stage::Patchwork { depth } => self.copy_patchwork(depth)?,
Stage::Isolate => todo!(),
Stage::BruteForceAndDesperation => todo!(),
Stage::Damaged | Stage::Intact => {
Stage::Untested => { self.copy_untested(); },
Stage::ForIsolation(level) => { self.copy_isolate(level); },
Stage::Damaged => {
println!("Cannot recover further.");
is_finished = true
}
};
// Need to reset seek position between algorithms.
self.input
.rewind()
.context("Failed to reset input seek position.")?;
self.output
.rewind()
.context("Failed to reset output seek position")?;
},
}
}
// Temporary.
let recovered_bytes = usize::MIN;
Ok(recovered_bytes)
}
/// Restore current progress based on MapFile.
/// Also updates MapFile if needed, such as to extend the MapFile domain.
pub fn restore(&mut self) -> anyhow::Result<()> {
self.map.extend(
crate::io::get_stream_length(&mut self.input)
.context("Failed to get input stream length.")? as usize,
);
Ok(())
self
}
/// Attempt to copy all untested blocks.
fn copy_patchwork(&mut self, mut depth: usize) -> anyhow::Result<()> {
let mut buf = DirectIOBuffer::new();
let mut buf_capacity = self.get_buf_capacity() as usize;
fn copy_untested(&mut self) -> &mut Self {
while self.map.get_stage() == (Stage::Patchwork { depth }) {
// Order of these two expressions matters, stupid.
buf_capacity /= depth;
depth += 1;
let mut untested: Vec<Cluster> = vec![];
for cluster in self.map.get_clusters(Stage::Patchwork { depth }) {
self.read_domain(buf.as_mut(), cluster.domain, buf_capacity, Stage::Isolate)?;
}
for cluster in self.map.get_clusters(Stage::Untested).iter_mut() {
untested.append(&mut cluster.subdivide(self.map.sector_size as usize));
}
Ok(())
todo!("Read and save data.");
self
}
fn read_domain(
&mut self,
buf: &mut [u8],
domain: Domain,
mut buf_capacity: usize,
next_stage: Stage,
) -> anyhow::Result<()> {
let mut cluster;
let mut read_position = domain.start;
/// Attempt to copy blocks via isolation at pass level.
fn copy_isolate(&mut self, level: u8) -> &mut Self {
while read_position < domain.end {
buf_capacity = buf_capacity.min(domain.end - read_position);
todo!();
cluster = Cluster {
domain: Domain {
start: read_position,
end: read_position + buf_capacity,
},
stage: Stage::Intact,
};
match self.read_sectors(buf.as_mut()) {
Ok(bytes) => {
self.output
.write_all(&buf[0..bytes])
.context("Failed to write data to output file")?;
read_position += bytes;
}
Err(err) => {
println!("Hit error: {:?}", err);
if CONFIG.reopen_on_error {
self.reload_input()
.context("Failed to reload input file after previous error")?;
}
self.input
.seek_relative(buf_capacity as i64)
.context("Failed to seek input by buf_capacity to skip previous error")?;
self.output
.seek_relative(buf_capacity as i64)
.context("Failed to seek output by buf_capacity to skip previous error")?;
cluster.stage = next_stage.clone();
}
}
self.map.update(cluster);
self.map.write_to(&mut crate::io::load_map_write()?)?;
}
Ok(())
self
}
/// Set buffer capacity as cluster length in bytes.
/// Set buffer capacities as cluster length in bytes.
/// Varies depending on the recovery stage.
fn get_buf_capacity(&mut self) -> u64 {
crate::MAX_BUFFER_SIZE.min(CONFIG.sector_size * CONFIG.cluster_length) as u64
}
fn set_buf_capacity(&mut self) -> &mut Self {
self.buf_capacity = (self.config.sector_size * self.config.cluster_length) as usize;
/// Reloads the input and restores the seek position.
fn reload_input(&mut self) -> anyhow::Result<()> {
let seek_pos = self.input.stream_position()?;
self.input = crate::io::load_input()?;
self.input.seek(SeekFrom::Start(seek_pos))?;
Ok(())
}
fn read_sectors(&mut self, mut buf: &mut [u8]) -> std::io::Result<usize> {
let mut raw_buf = vec![crate::FB_NULL_VALUE; buf.len()];
let result = self.input.read(&mut raw_buf);
if result.is_err() {
return result;
} else if let Ok(mut bytes) = result
&& bytes >= CONFIG.sector_size
{
// Remember that this is integer division (floor division)
bytes = (bytes / CONFIG.sector_size) * CONFIG.sector_size;
buf.write_all(&raw_buf[..bytes]).unwrap();
return Ok(bytes);
} else {
return Ok(0);
}
self
}
}
#[cfg(test)]
#[allow(unused)]
mod tests {
use super::*;
// Test for Recover::set_buf_capacity
}
}
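The `read_sectors` logic in the removed code rounds a partial read down to a whole number of sectors via integer division. That alignment step can be checked in isolation — a minimal sketch, with 512 as an assumed sector size (the crate takes it from `CONFIG.sector_size`):

```rust
// Integer (floor) division drops any trailing partial sector,
// so writes to the output stay sector-aligned.
fn align_down(bytes: usize, sector_size: usize) -> usize {
    (bytes / sector_size) * sector_size
}

fn main() {
    assert_eq!(align_down(1024, 512), 1024); // exact multiple unchanged
    assert_eq!(align_down(1300, 512), 1024); // trailing partial sector dropped
    assert_eq!(align_down(511, 512), 0);     // less than one sector rounds to zero
}
```

The last case explains the `Ok(0)` branch in `read_sectors`: a read shorter than one sector yields no usable aligned data.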

View File

@@ -1,17 +0,0 @@
// Acknowledge sister/child
mod module;
// std
use std::*;
// sister/child
use module::*;
// parent
use super::*;
// ancestor of parent
use crate::*;
// external
use external::*;