[core] parallel loading of shards #12028

Conversation
@@ -310,6 +311,130 @@ def load_model_dict_into_meta(
    return offload_index, state_dict_index


def check_support_param_buffer_assignment(model_to_load, state_dict, start_prefix=""):
Moved it here from `modeling_utils.py`.
    return offload_index, state_dict_index, mismatched_keys, error_msgs


def _find_mismatched_keys(
Same. Moved it out of `modeling_utils.py`.
    if len(resolved_model_file) > 1:
        resolved_model_file = logging.tqdm(resolved_model_file, desc="Loading checkpoint shards")

    mismatched_keys = []
    assign_to_params_buffers = None
    error_msgs = []

    for shard_file in resolved_model_file:
        state_dict = load_state_dict(shard_file, dduf_entries=dduf_entries)
        mismatched_keys += _find_mismatched_keys(
            state_dict, model_state_dict, loaded_keys, ignore_mismatched_sizes
This has been moved to `load_shard_file()`.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
def load_shard_files_with_threadpool(args_list):
    num_workers = int(os.environ.get("HF_PARALLEL_LOADING_WORKERS", "8"))
Would add `HF_PARALLEL_LOADING_WORKERS` as a constant at the top of the file for consistency.
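A minimal sketch of how the thread pool could be wired up, assuming the `load_shard_file(args)` helper sketched above and the suggested module-level constant; worker capping and result merging are simplified here:

```python
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

DEFAULT_HF_PARALLEL_LOADING_WORKERS = 8  # constant suggested above

def load_shard_files_with_threadpool(args_list):
    num_workers = int(
        os.environ.get("HF_PARALLEL_LOADING_WORKERS", DEFAULT_HF_PARALLEL_LOADING_WORKERS)
    )
    # Never spawn more workers than there are shards to load.
    num_workers = min(num_workers, len(args_list))

    mismatched_keys, error_msgs = [], []
    offload_index = state_dict_index = None

    with ThreadPoolExecutor(max_workers=num_workers) as executor:
        # Each worker loads one shard; results are merged as they complete.
        futures = [executor.submit(load_shard_file, args) for args in args_list]
        for future in as_completed(futures):
            offload_index, state_dict_index, _mismatched, _errors = future.result()
            mismatched_keys += _mismatched
            error_msgs += _errors

    return offload_index, state_dict_index, mismatched_keys, error_msgs
```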
    args_list = [
        (
            model,
            model_state_dict,
            shard_file,
            device_map,
            dtype,
            hf_quantizer,
            keep_in_fp32_modules,
            dduf_entries,
            loaded_keys,
            unexpected_keys,
            offload_index,
            offload_folder,
            state_dict_index,
            state_dict_folder,
            ignore_mismatched_sizes,
            low_cpu_mem_usage,
        )
        for shard_file in resolved_model_file
    ]
Since the same arguments are used across the two loading functions, it's a good candidate for `functools.partial`:
load_fn = partial(
    load_shard_files_with_threadpool if is_parallel_loading_enabled else load_shard_file,
    model=model,
    model_state_dict=model_state_dict,
    device_map=device_map,
    dtype=dtype,
    hf_quantizer=hf_quantizer,
    keep_in_fp32_modules=keep_in_fp32_modules,
    dduf_entries=dduf_entries,
    loaded_keys=loaded_keys,
    unexpected_keys=unexpected_keys,
    offload_index=offload_index,
    offload_folder=offload_folder,
    state_dict_index=state_dict_index,
    state_dict_folder=state_dict_folder,
    ignore_mismatched_sizes=ignore_mismatched_sizes,
    low_cpu_mem_usage=low_cpu_mem_usage,
)
if is_parallel_loading_enabled:
    offload_index, state_dict_index, _mismatched_keys, _error_msgs = load_fn(
        resolved_model_file,
    )
    error_msgs += _error_msgs
    mismatched_keys += _mismatched_keys
else:
    shard_files = resolved_model_file
    if len(resolved_model_file) > 1:
        shard_files = logging.tqdm(resolved_model_file, desc="Loading checkpoint shards")
    for shard_file in shard_files:
        offload_index, state_dict_index, _mismatched_keys, _error_msgs = load_fn(shard_file)
        error_msgs += _error_msgs
        mismatched_keys += _mismatched_keys
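With this, only the positional argument differs between the two paths: the full `resolved_model_file` list for the thread-pool loader, or a single `shard_file` in the sequential loop.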
Co-authored-by: Dhruv Nair <[email protected]>
@stevhliu, could you help add docs for this PR (separate PR is fine)? I think we could have some guidance on how to load a model with parallel loading enabled. #11904 could also be mentioned in the document. Then we're working on #12122.
@DN6 thanks a lot for your thoughtful suggestions. I have reflected them and added a test case as well. LMK what you think.
Thanks!
@@ -43,6 +43,8 @@
DIFFUSERS_REQUEST_TIMEOUT = 60
DIFFUSERS_ATTN_BACKEND = os.getenv("DIFFUSERS_ATTN_BACKEND", "native")
DIFFUSERS_ATTN_CHECKS = os.getenv("DIFFUSERS_ATTN_CHECKS", "0") in ENV_VARS_TRUE_VALUES
DEFAULT_HF_PARALLEL_LOADING_WORKERS = 8
HF_PARALLEL_LOADING_FLAG = "HF_ENABLE_PARALLEL_LOADING"
I meant to run the env check here:

HF_ENABLE_PARALLEL_LOADING = os.environ.get("HF_ENABLE_PARALLEL_LOADING", "").upper() in ENV_VARS_TRUE_VALUES

Then import the constant into `modeling_utils`.
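i.e., roughly (the file paths are assumptions, and `ENV_VARS_TRUE_VALUES` is restated here so the sketch is self-contained):

```python
# in the constants module (assumed: src/diffusers/utils/constants.py)
import os

ENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"}  # matches the HF utils definition

DEFAULT_HF_PARALLEL_LOADING_WORKERS = 8
HF_ENABLE_PARALLEL_LOADING = (
    os.environ.get("HF_ENABLE_PARALLEL_LOADING", "").upper() in ENV_VARS_TRUE_VALUES
)

# then, in modeling_utils:
# from ..utils.constants import DEFAULT_HF_PARALLEL_LOADING_WORKERS, HF_ENABLE_PARALLEL_LOADING
```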
PR: #12137
What does this PR do?
Similar to huggingface/transformers#36835.
`main`: 8.162s, this branch: 5.663s
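The benchmark snippet itself was collapsed in this capture; a minimal sketch of how such a timing could be taken (the model class and repo id below are placeholders, not from the PR):

```python
import os
import time

# The flag is evaluated when diffusers is imported, so set it first.
os.environ["HF_ENABLE_PARALLEL_LOADING"] = "yes"

from diffusers import FluxTransformer2DModel  # placeholder: any model with a sharded checkpoint

start = time.perf_counter()
model = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer"
)
print(f"time: {time.perf_counter() - start:.3f}s")
```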