fix(tarball): use rwlock instead of dashmap to avoid potential deadlock #200
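For context, a minimal illustrative sketch (not taken from pacquet's code) of how holding a DashMap guard across an .await can hang a runtime. It assumes a single-threaded tokio runtime, and the URL key and values are made up:

use dashmap::DashMap;
use std::sync::Arc;
use tokio::sync::Notify;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let cache: Arc<DashMap<String, String>> = Arc::new(DashMap::new());
    cache.insert("https://example.com/pkg.tgz".to_string(), "in-progress".to_string());
    let notify = Arc::new(Notify::new());

    let reader = {
        let cache = Arc::clone(&cache);
        let notify = Arc::clone(&notify);
        tokio::spawn(async move {
            // Holds a shard read lock for as long as `guard` is alive.
            let guard = cache.get("https://example.com/pkg.tgz").unwrap();
            notify.notified().await; // suspended while the shard lock is still held
            guard.value().clone()
        })
    };

    tokio::task::yield_now().await; // let the reader task run up to its .await

    // This insert needs a write lock on the same shard. It blocks the only
    // worker thread, the reader can never be resumed to release its guard,
    // and the program hangs: the kind of deadlock the PR title refers to.
    cache.insert("https://example.com/pkg.tgz".to_string(), "available".to_string());
    notify.notify_one();
    let _ = reader.await;
}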

Open · wants to merge 5 commits into main

Changes from 3 commits
14 changes: 0 additions & 14 deletions Cargo.lock

(Generated file; diff not rendered.)

1 change: 0 additions & 1 deletion Cargo.toml
@@ -35,7 +35,6 @@ async-recursion = { version = "1.0.5" }
clap = { version = "4", features = ["derive", "string"] }
command-extra = { version = "1.0.0" }
base64 = { version = "0.21.5" }
dashmap = { version = "5.5.3" }
derive_more = { version = "1.0.0-beta.3", features = ["full"] }
dunce = { version = "1.0.4" }
home = { version = "0.5.5" }
3 changes: 2 additions & 1 deletion crates/cli/src/state.rs
@@ -6,6 +6,7 @@ use pacquet_package_manifest::{PackageManifest, PackageManifestError};
use pacquet_tarball::MemCache;
use pipe_trait::Pipe;
use reqwest::Client;
use std::collections::HashMap;
use std::path::PathBuf;

/// Application state when running `pacquet run` or `pacquet install`.
@@ -44,7 +45,7 @@ impl State {
lockfile: call_load_lockfile(config.lockfile, Lockfile::load_from_current_dir)
.map_err(InitStateError::LoadLockfile)?,
http_client: Client::new(),
tarball_mem_cache: MemCache::new(),
tarball_mem_cache: MemCache::new(HashMap::new()),
})
}
}
32 changes: 31 additions & 1 deletion crates/cli/tests/install.rs
@@ -6,9 +6,10 @@ use command_extra::CommandExtra;
use pacquet_testing_utils::{
bin::{AddMockedRegistry, CommandTempCwd},
fs::{get_all_files, get_all_folders, is_symlink_or_junction},
panic_after,
};
use pipe_trait::Pipe;
use std::fs;
use std::{fs, thread, time::Duration};

#[test]
fn should_install_dependencies() {
@@ -131,3 +132,32 @@ fn should_install_index_files() {

drop((root, mock_instance)); // cleanup
}

#[test]
fn should_install_duplicated_dependencies() {
let CommandTempCwd { pacquet, root, workspace, .. } =
CommandTempCwd::init().add_mocked_registry();

eprintln!("Creating package.json...");
let manifest_path = workspace.join("package.json");
let package_json_content = serde_json::json!({
"dependencies": {
[Member Author]: Before rebasing onto main (i.e. before the mock registry was introduced), this test passed, but now it doesn't seem to be able to install express, so it will take a bit more time to see what's going on.

"express": "4.18.2",
},
"devDependencies": {
"express": "4.18.2",
},
});
fs::write(manifest_path, package_json_content.to_string()).expect("write to package.json");

panic_after(Duration::from_secs(30), move || {
thread::sleep(Duration::from_millis(200));
eprintln!("Executing command...");
pacquet.with_arg("install").assert().success();
eprintln!("Make sure the package is installed");
assert!(is_symlink_or_junction(&workspace.join("node_modules/express")).unwrap());
assert!(workspace.join("node_modules/.pnpm/express@4.18.2").exists());
});

drop(root); // cleanup
}
1 change: 0 additions & 1 deletion crates/tarball/Cargo.toml
@@ -16,7 +16,6 @@ pacquet-fs = { workspace = true }
pacquet-store-dir = { workspace = true }

base64 = { workspace = true }
dashmap = { workspace = true }
derive_more = { workspace = true }
miette = { workspace = true }
pipe-trait = { workspace = true }
18 changes: 12 additions & 6 deletions crates/tarball/src/lib.rs
@@ -2,13 +2,13 @@ use std::{
collections::HashMap,
ffi::OsString,
io::{Cursor, Read},
mem::drop,
path::PathBuf,
sync::Arc,
time::UNIX_EPOCH,
};

use base64::{engine::general_purpose::STANDARD as BASE64_STD, Engine};
use dashmap::DashMap;
use derive_more::{Display, Error, From};
use miette::Diagnostic;
use pacquet_fs::file_mode;
@@ -88,7 +88,7 @@ pub enum CacheValue {
/// Internal in-memory cache of tarballs.
///
/// The key of this hashmap is the url of each tarball.
pub type MemCache = DashMap<String, Arc<RwLock<CacheValue>>>;
pub type MemCache = RwLock<HashMap<String, Arc<RwLock<CacheValue>>>>;
Contributor:
I wonder if MemoMap can be shared across async tasks? Since the cache is grow-only.
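For reference, a rough sketch of what that might look like, assuming memo-map's get_or_insert API; the key/value types and URL are placeholders rather than pacquet's actual cache types:

use memo_map::MemoMap;
use std::sync::Arc;

fn demo() {
    // Grow-only cache keyed by tarball URL; entries are never removed or moved,
    // so MemoMap can hand out shared references from get_or_insert via &self.
    let cache: Arc<MemoMap<String, Vec<u8>>> = Arc::new(MemoMap::new());
    let data = cache.get_or_insert(&"https://example.com/pkg.tgz".to_string(), || {
        vec![0u8; 3] // placeholder for the downloaded tarball bytes
    });
    assert_eq!(data.len(), 3);
}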

Member Author (@await-ovo), Nov 20, 2023:
Looks like it's a perfect fit for our use case. I just tried memo-map on this branch, and it works well.

I've also run integrated-benchmark against the branches main, fix-mem-cache-deadlock, and refactor-use-memo-map; here are the results on my local machine:

  • frozen-lockfile
$ just integrated-benchmark  fix-mem-cache-deadlock refactor-use-memo-map main -s=frozen-lockfile

Benchmark 1: pacquet@fix-mem-cache-deadlock
  Time (mean ± σ):     532.3 ms ± 189.3 ms    [User: 99.6 ms, System: 729.0 ms]
  Range (min … max):   390.8 ms … 1049.7 ms    10 runs

  Warning: The first benchmarking run for this command was significantly slower than the rest (1.050 s). This could be caused by (filesystem) caches that were not filled until after the first run. You are already using both the '--warmup' option as well as the '--prepare' option. Consider re-running the benchmark on a quiet system. Maybe it was a random outlier. Alternatively, consider increasing the warmup count.

Benchmark 2: pacquet@refactor-use-memo-map
  Time (mean ± σ):     517.7 ms ± 104.7 ms    [User: 110.5 ms, System: 764.0 ms]
  Range (min … max):   412.1 ms … 707.7 ms    10 runs

Benchmark 3: pacquet@main
  Time (mean ± σ):     517.9 ms ±  95.5 ms    [User: 113.1 ms, System: 801.1 ms]
  Range (min … max):   410.9 ms … 712.6 ms    10 runs

Summary
  pacquet@refactor-use-memo-map ran
    1.00 ± 0.27 times faster than pacquet@main
    1.03 ± 0.42 times faster than pacquet@fix-mem-cache-deadlock
  • clean-install
$ just integrated-benchmark  fix-mem-cache-deadlock refactor-use-memo-map main -s=clean-install

Benchmark 1: pacquet@fix-mem-cache-deadlock
  Time (mean ± σ):      7.350 s ±  0.156 s    [User: 0.587 s, System: 0.909 s]
  Range (min … max):    7.092 s …  7.603 s    10 runs

Benchmark 2: pacquet@refactor-use-memo-map
  Time (mean ± σ):      7.124 s ±  0.849 s    [User: 0.521 s, System: 0.868 s]
  Range (min … max):    6.713 s …  9.532 s    10 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs.

Benchmark 3: pacquet@main
 ⠦ Initial time measurement       ░░░░░░░░░░ … ETA 00:00:00 ^C
$ just integrated-benchmark  fix-mem-cache-deadlock refactor-use-memo-map  -s=clean-install
Building "refactor-use-memo-map"...
    Finished release [optimized] target(s) in 0.57s
Benchmark 1: pacquet@fix-mem-cache-deadlock
  Time (mean ± σ):      7.400 s ±  0.157 s    [User: 0.578 s, System: 0.936 s]
  Range (min … max):    7.207 s …  7.761 s    10 runs

Benchmark 2: pacquet@refactor-use-memo-map
  Time (mean ± σ):      7.706 s ±  1.574 s    [User: 0.544 s, System: 0.910 s]
  Range (min … max):    6.906 s … 12.145 s    10 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs.

Summary
  pacquet@fix-mem-cache-deadlock ran
    1.04 ± 0.21 times faster than pacquet@refactor-use-memo-map

It seems that integrated-benchmark currently deadlocks on the main branch, so there's no comparison against main.

Overall, thanks for your suggestion; if there's no problem on your end, I think we can just use memo-map. That said, I'm not sure whether we should use memo-map here after all; it looks like RwLock would be a little better for clean-install?

Contributor:
Frozen lockfile doesn't use the memory cache, so it's irrelevant here (it invokes run_without_mem_cache).

> I don't know if we should use memo-map here, it looks like RwLock for clean-install would be a little better?

The integrated-benchmark script has a --fixture-dir/-D option; you may use it to specify a bigger package.json file.

Member Author:
For this package.json:

{
  "dependencies": {
    "nanoid": "^3.3.6",
    "picocolors": "^1.0.0",
    "source-map-js": "^1.0.2"
  },
  "devDependencies": {
    "@size-limit/preset-small-lib": "^9.0.0",
    "@types/node": "^20.8.3",
    "c8": "^8.0.1",
    "check-dts": "^0.7.2",
    "clean-publish": "^4.2.0",
    "concat-with-sourcemaps": "^1.1.0",
    "nanodelay": "^1.0.8",
    "nanospy": "^1.0.0",
    "postcss-parser-tests": "^8.8.0",
    "simple-git-hooks": "^2.9.0",
    "size-limit": "^9.0.0",
    "strip-ansi": "^6.0.1",
    "ts-node": "^10.9.1",
    "typescript": "^5.2.2",
    "uvu": "^0.5.6"
  }
}

Running just integrated-benchmark fix-mem-cache-deadlock refactor-use-memo-map -s=clean-install -D=/Users/await-ovo/Documents/test/test-pnpm/alotta-modules shows:

List of branches:
Building "refactor-use-memo-map"...
    Finished release [optimized] target(s) in 0.47s
Benchmark 1: pacquet@fix-mem-cache-deadlock
  Time (mean ± σ):      3.943 s ±  0.205 s    [User: 3.193 s, System: 2.717 s]
  Range (min … max):    3.618 s …  4.192 s    10 runs

Benchmark 2: pacquet@refactor-use-memo-map
  Time (mean ± σ):      4.041 s ±  0.184 s    [User: 3.328 s, System: 2.843 s]
  Range (min … max):    3.818 s …  4.355 s    10 runs

Summary
  pacquet@fix-mem-cache-deadlock ran
    1.02 ± 0.07 times faster than pacquet@refactor-use-memo-map

Member Author:
I wanted to use this package.json for a benchmark, but there is a "Too many open files" error during install.

Contributor:
Try a smaller one, for example:

{
  "dependencies": {
    "@babel/core": "^7.23.2",
    "@tsfun/all": "0.0.37",
    "@types/node": "16.18.39",
    "cross-env": "7.0.3",
    "glob-parent": "6.0.2",
    "husky": "8.0.3",
    "jest": "29.7.0",
    "json5": "2.2.3",
    "jsonwebtoken": "9.0.2",
    "keyv": "4.5.3",
    "react": "18.2.0",
    "rimraf": "3.0.2",
    "semver": "7.5.4",
    "shx": "0.3.4",
    "ts-jest": "29.1.1",
    "ts-node": "10.9.1",
    "typescript": "5.2.2",
    "verdaccio": "5.27.0",
    "yaml": "2.3.4"
  }
}

Member Author:
Unfortunately, this gets an error:

[image: screenshot of the error]

Maybe we should test the benchmark with a bigger repository after this is resolved.

Contributor:
Can you try combining your branches with #210 into 2 new branches and see if the "too many open files" error still exists? If the error disappears, can you try benchmarking them?

Member Author:
I tried, but I still can't get integrated-benchmark results; it seems to be because of a circular dependency in alotta-files?

Contributor:
@zkochan It seems pacquet can't yet handle circular dependencies (without a lockfile). Can you tell us which circular dependencies to remove?

Also, @await-ovo, another PR was just merged; you should pull from upstream or rebase.


#[instrument(skip(gz_data), fields(gz_data_len = gz_data.len()))]
fn decompress_gzip(gz_data: &[u8], unpacked_size: Option<usize>) -> Result<Vec<u8>, TarballError> {
@@ -130,9 +130,9 @@ impl<'a> DownloadTarballToStore<'a> {

// QUESTION: I see no copying from existing store_dir, is there such mechanism?
// TODO: If it's not implemented yet, implement it

if let Some(cache_lock) = mem_cache.get(package_url) {
let notify = match &*cache_lock.write().await {
let mem_cache_reader = mem_cache.read().await;
if let Some(cache_lock) = mem_cache_reader.get(package_url) {
let notify = match &*cache_lock.read().await {
CacheValue::Available(cas_paths) => {
return Ok(Arc::clone(cas_paths));
}
@@ -146,13 +146,19 @@
}
unreachable!("Failed to get or compute tarball data for {package_url:?}");
} else {
drop(mem_cache_reader);
let notify = Arc::new(Notify::new());
let cache_lock = notify
.pipe_ref(Arc::clone)
.pipe(CacheValue::InProgress)
.pipe(RwLock::new)
.pipe(Arc::new);
if mem_cache.insert(package_url.to_string(), Arc::clone(&cache_lock)).is_some() {
if mem_cache
.write()
.await
.insert(package_url.to_string(), Arc::clone(&cache_lock))
.is_some()
{
tracing::warn!(target: "pacquet::download", ?package_url, "Race condition detected when writing to cache");
}
let cas_paths = self.run_without_mem_cache().await?.pipe(Arc::new);
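Distilled from the hunk above, the locking discipline is: take a read lock to check for a hit, drop the read guard, then take a write lock only to publish the new entry. A simplified sketch of that pattern, with a string value standing in for the real CacheValue/Notify machinery (not pacquet's actual code):

use std::{collections::HashMap, sync::Arc};
use tokio::sync::RwLock;

type Cache = RwLock<HashMap<String, Arc<String>>>;

async fn get_or_compute(cache: &Cache, url: &str) -> Arc<String> {
    {
        let reader = cache.read().await;
        if let Some(hit) = reader.get(url) {
            return Arc::clone(hit);
        }
    } // the read guard is dropped here, before the write lock is taken

    let value = Arc::new(format!("computed for {url}")); // stand-in for the download
    let mut writer = cache.write().await;
    // Another task may have raced us and inserted first; keep the existing entry.
    Arc::clone(writer.entry(url.to_string()).or_insert(value))
}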
20 changes: 20 additions & 0 deletions crates/testing-utils/src/lib.rs
@@ -1,2 +1,22 @@
use std::{sync::mpsc, thread, time::Duration};
pub mod bin;
pub mod fs;

pub fn panic_after<T, F>(d: Duration, f: F) -> T
where
T: Send + 'static,
F: FnOnce() -> T,
F: Send + 'static,
{
let (done_tx, done_rx) = mpsc::channel();
let handle = thread::spawn(move || {
let val = f();
done_tx.send(()).expect("Unable to send completion signal");
val
});

match done_rx.recv_timeout(d) {
Ok(_) => handle.join().expect("test panicked"),
Err(_) => panic!("test timeout"),
}
}
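
A note on this helper (my reading of the code above, not a statement from the PR): recv_timeout bounds how long the calling test waits, but the worker thread is not killed on timeout; the panic!("test timeout") fails the test while the spawned closure may keep running until the process exits. A hypothetical minimal usage, mirroring the install test above:

use pacquet_testing_utils::panic_after;
use std::time::Duration;

#[test]
fn finishes_in_time() {
    // The closure must complete within 5 seconds, otherwise the test fails
    // with "test timeout".
    let result = panic_after(Duration::from_secs(5), || 1 + 1);
    assert_eq!(result, 2);
}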