From f1ab6648fc67ff8af274d2e65a400de9af0e9ddf Mon Sep 17 00:00:00 2001
From: Alex Crichton
Date: Fri, 8 May 2015 22:37:49 -0700
Subject: [PATCH] rustc_back: Only use archive member filenames

I've been working with some archives generated by MSVC's `lib.exe` tool
lately, and it looks like the embedded names of the members in those
archives sometimes have slashes in them (e.g. `foo/bar/baz.obj`).
Currently the compiler chokes on these paths as it assumes that each
member name in the archive is only a filename (which is what unix
produces). This commit interprets the name of each file in all archives
as a path and uses only the `file_name` portion of that path when
extracting the file to a separate location and reassembling it into a
new archive later. Note that duplicate filenames are already handled,
so this won't introduce any conflicts.
---
 src/librustc_back/archive.rs | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/src/librustc_back/archive.rs b/src/librustc_back/archive.rs
index 37d784692fd0e..cad1522ee1344 100644
--- a/src/librustc_back/archive.rs
+++ b/src/librustc_back/archive.rs
@@ -306,6 +306,21 @@ impl<'a> ArchiveBuilder<'a> {
             if filename.contains(".SYMDEF") { continue }
             if skip(filename) { continue }
 
+            // Archives on unix systems typically do not have slashes in
+            // filenames as the `ar` utility generally only uses the last
+            // component of a path for the filename list in the archive. On
+            // Windows, however, archives assembled with `lib.exe` will
+            // preserve the full path to the file that was placed in the
+            // archive, including path separators.
+            //
+            // The code below is munging paths, so it'll go wrong pretty
+            // quickly if there are unexpected slashes in the filename; here
+            // we just chop off everything but the filename component. Note
+            // that this can cause duplicate filenames, but that's also
+            // handled below.
+            let filename = Path::new(filename).file_name().unwrap()
+                                              .to_str().unwrap();
+
             // An archive can contain files of the same name multiple times, so
             // we need to be sure to not have them overwrite one another when we
             // extract them. Consequently we need to find a truly unique file
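
As a standalone illustration of the normalization this patch adds, the
sketch below applies the same `Path::file_name` step to a couple of member
names. The `member_file_name` helper is hypothetical (the patch inlines the
calls and `unwrap()`s both results); falling back to the raw name on a
lossy conversion is likewise an assumption made here to keep the function
total.

    use std::path::Path;

    // Hypothetical helper: reduce an archive member name to its final
    // path component, as the patch does before extracting a member. For
    // a name with no separators (the usual unix `ar` case) this is the
    // identity.
    fn member_file_name(name: &str) -> &str {
        Path::new(name)
            .file_name()                // last component, e.g. "baz.obj"
            .and_then(|f| f.to_str())   // member names here are UTF-8
            .unwrap_or(name)            // assumption: keep the raw name
    }

    fn main() {
        // A `lib.exe`-style member name with embedded separators...
        assert_eq!(member_file_name("foo/bar/baz.obj"), "baz.obj");
        // ...and a plain unix-style member name, which passes through.
        assert_eq!(member_file_name("baz.obj"), "baz.obj");
    }

One point worth noting: `std::path` treats `/` as a separator on both unix
and Windows, but `\` only on Windows, so a member name containing
backslashes would only be split when the compiler runs on a Windows host.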