  [LLMs] Vicuna: Downloading and Using a LLaMA-Based Chatbot Language Model
    Machine Learning/Large Language Models 2023. 5. 1. 00:25
    In the previous post, we covered how to download LLaMA, the LLM released by Meta.
    In this post, we look at how to download and run Vicuna, a chatbot model obtained by
    fine-tuning LLaMA that reportedly reaches about 90%* of ChatGPT's quality.

    [Figure] Relative Response Quality Assessed by GPT-4*

     

    Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality

    For details on the model itself, including how it was trained, see the post linked below.
    This post focuses on installation and running the model.

    [Image] Vicuna (generated by Stable Diffusion 2.1)

    [ Github / Post / Demo ]

     

    Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality

    by the Team with members from UC Berkeley, CMU, Stanford, MBZUAI, and UC San Diego

    vicuna.lmsys.org

     

    Vicuna 13B Demo (reference: https://github.com/lm-sys/FastChat)

     

    1. Install

    Installing the library in a conda environment is recommended.
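    For example, a minimal conda setup might look like the following (the environment name and Python version are arbitrary choices, not from the original post):

    conda create -n fastchat python=3.10 -y
    conda activate fastchat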

    pip3 install fschat

     

    2. Model Weights

    1) First, download the LLaMA weights under the LLaMA model license, as described in the previous post.

    2) Then convert the weights to the Hugging Face format. [reference]

    Create a file as shown below and paste in the following code:

    convert_llama_weights_to_hf.py [reference] - modified from the original script (LlamaTokenizerFast -> LlamaTokenizer)

    # Copyright 2022 EleutherAI and The HuggingFace Inc. team. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    import argparse
    import gc
    import json
    import math
    import os
    import shutil
    import warnings
    
    import torch
    
    from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer
    
    
    # NOTE: per this post, the slow LlamaTokenizer (imported above) is used instead of
    # LlamaTokenizerFast, so the original script's guarded import of the fast tokenizer
    # is not needed here.
    
    """
    Sample usage:
    
    ```
    python src/transformers/models/llama/convert_llama_weights_to_hf.py \
        --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
    ```
    
    Thereafter, models can be loaded via:
    
    ```py
    from transformers import LlamaForCausalLM, LlamaTokenizer
    
    model = LlamaForCausalLM.from_pretrained("/output/path")
    tokenizer = LlamaTokenizer.from_pretrained("/output/path")
    ```
    
    Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions
    come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).
    """
    
    INTERMEDIATE_SIZE_MAP = {
        "7B": 11008,
        # "13B": 13824,
        # "30B": 17920,
        # "65B": 22016,
    }
    NUM_SHARDS = {
        "7B": 1,
        # "13B": 2,
        # "30B": 4,
        # "65B": 8,
    }
    
    
    def compute_intermediate_size(n):
        return int(math.ceil(n * 8 / 3) + 255) // 256 * 256
    
    
    def read_json(path):
        with open(path, "r") as f:
            return json.load(f)
    
    
    def write_json(text, path):
        with open(path, "w") as f:
            json.dump(text, f)
    
    
    def write_model(model_path, input_base_path, model_size):
        os.makedirs(model_path, exist_ok=True)
        tmp_model_path = os.path.join(model_path, "tmp")
        os.makedirs(tmp_model_path, exist_ok=True)
    
        params = read_json(os.path.join(input_base_path, "params.json"))
        num_shards = NUM_SHARDS[model_size]
        n_layers = params["n_layers"]
        n_heads = params["n_heads"]
        n_heads_per_shard = n_heads // num_shards
        dim = params["dim"]
        dims_per_head = dim // n_heads
        base = 10000.0
        inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head))
    
        # permute for sliced rotary
        def permute(w):
            return w.view(n_heads, dim // n_heads // 2, 2, dim).transpose(1, 2).reshape(dim, dim)
    
        print(f"Fetching all parameters from the checkpoint at {input_base_path}.")
        # Load weights
        if model_size == "7B":
            # Not sharded
            # (The sharded implementation would also work, but this is simpler.)
            loaded = torch.load(os.path.join(input_base_path, "consolidated.00.pth"), map_location="cpu")
        else:
            # Sharded
            loaded = [
                torch.load(os.path.join(input_base_path, f"consolidated.{i:02d}.pth"), map_location="cpu")
                for i in range(num_shards)
            ]
        param_count = 0
        index_dict = {"weight_map": {}}
        for layer_i in range(n_layers):
            filename = f"pytorch_model-{layer_i + 1}-of-{n_layers + 1}.bin"
            if model_size == "7B":
                # Unsharded
                state_dict = {
                    f"model.layers.{layer_i}.self_attn.q_proj.weight": permute(
                        loaded[f"layers.{layer_i}.attention.wq.weight"]
                    ),
                    f"model.layers.{layer_i}.self_attn.k_proj.weight": permute(
                        loaded[f"layers.{layer_i}.attention.wk.weight"]
                    ),
                    f"model.layers.{layer_i}.self_attn.v_proj.weight": loaded[f"layers.{layer_i}.attention.wv.weight"],
                    f"model.layers.{layer_i}.self_attn.o_proj.weight": loaded[f"layers.{layer_i}.attention.wo.weight"],
                    f"model.layers.{layer_i}.mlp.gate_proj.weight": loaded[f"layers.{layer_i}.feed_forward.w1.weight"],
                    f"model.layers.{layer_i}.mlp.down_proj.weight": loaded[f"layers.{layer_i}.feed_forward.w2.weight"],
                    f"model.layers.{layer_i}.mlp.up_proj.weight": loaded[f"layers.{layer_i}.feed_forward.w3.weight"],
                    f"model.layers.{layer_i}.input_layernorm.weight": loaded[f"layers.{layer_i}.attention_norm.weight"],
                    f"model.layers.{layer_i}.post_attention_layernorm.weight": loaded[f"layers.{layer_i}.ffn_norm.weight"],
                }
            else:
                # Sharded
                # Note that in the 13B checkpoint, not cloning the two following weights will result in the checkpoint
                # becoming 37GB instead of 26GB for some reason.
                state_dict = {
                    f"model.layers.{layer_i}.input_layernorm.weight": loaded[0][
                        f"layers.{layer_i}.attention_norm.weight"
                    ].clone(),
                    f"model.layers.{layer_i}.post_attention_layernorm.weight": loaded[0][
                        f"layers.{layer_i}.ffn_norm.weight"
                    ].clone(),
                }
                state_dict[f"model.layers.{layer_i}.self_attn.q_proj.weight"] = permute(
                    torch.cat(
                        [
                            loaded[i][f"layers.{layer_i}.attention.wq.weight"].view(n_heads_per_shard, dims_per_head, dim)
                            for i in range(num_shards)
                        ],
                        dim=0,
                    ).reshape(dim, dim)
                )
                state_dict[f"model.layers.{layer_i}.self_attn.k_proj.weight"] = permute(
                    torch.cat(
                        [
                            loaded[i][f"layers.{layer_i}.attention.wk.weight"].view(n_heads_per_shard, dims_per_head, dim)
                            for i in range(num_shards)
                        ],
                        dim=0,
                    ).reshape(dim, dim)
                )
                state_dict[f"model.layers.{layer_i}.self_attn.v_proj.weight"] = torch.cat(
                    [
                        loaded[i][f"layers.{layer_i}.attention.wv.weight"].view(n_heads_per_shard, dims_per_head, dim)
                        for i in range(num_shards)
                    ],
                    dim=0,
                ).reshape(dim, dim)
    
                state_dict[f"model.layers.{layer_i}.self_attn.o_proj.weight"] = torch.cat(
                    [loaded[i][f"layers.{layer_i}.attention.wo.weight"] for i in range(num_shards)], dim=1
                )
                state_dict[f"model.layers.{layer_i}.mlp.gate_proj.weight"] = torch.cat(
                    [loaded[i][f"layers.{layer_i}.feed_forward.w1.weight"] for i in range(num_shards)], dim=0
                )
                state_dict[f"model.layers.{layer_i}.mlp.down_proj.weight"] = torch.cat(
                    [loaded[i][f"layers.{layer_i}.feed_forward.w2.weight"] for i in range(num_shards)], dim=1
                )
                state_dict[f"model.layers.{layer_i}.mlp.up_proj.weight"] = torch.cat(
                    [loaded[i][f"layers.{layer_i}.feed_forward.w3.weight"] for i in range(num_shards)], dim=0
                )
    
            state_dict[f"model.layers.{layer_i}.self_attn.rotary_emb.inv_freq"] = inv_freq
            for k, v in state_dict.items():
                index_dict["weight_map"][k] = filename
                param_count += v.numel()
            torch.save(state_dict, os.path.join(tmp_model_path, filename))
    
        filename = f"pytorch_model-{n_layers + 1}-of-{n_layers + 1}.bin"
        if model_size == "7B":
            # Unsharded
            state_dict = {
                "model.embed_tokens.weight": loaded["tok_embeddings.weight"],
                "model.norm.weight": loaded["norm.weight"],
                "lm_head.weight": loaded["output.weight"],
            }
        else:
            state_dict = {
                "model.norm.weight": loaded[0]["norm.weight"],
                "model.embed_tokens.weight": torch.cat(
                    [loaded[i]["tok_embeddings.weight"] for i in range(num_shards)], dim=1
                ),
                "lm_head.weight": torch.cat([loaded[i]["output.weight"] for i in range(num_shards)], dim=0),
            }
    
        for k, v in state_dict.items():
            index_dict["weight_map"][k] = filename
            param_count += v.numel()
        torch.save(state_dict, os.path.join(tmp_model_path, filename))
    
        # Write configs
        index_dict["metadata"] = {"total_size": param_count * 2}
        write_json(index_dict, os.path.join(tmp_model_path, "pytorch_model.bin.index.json"))
    
        config = LlamaConfig(
            hidden_size=dim,
            intermediate_size=compute_intermediate_size(dim),
            num_attention_heads=params["n_heads"],
            num_hidden_layers=params["n_layers"],
            rms_norm_eps=params["norm_eps"],
        )
        config.save_pretrained(tmp_model_path)
    
        # Make space so we can load the model properly now.
        del state_dict
        del loaded
        gc.collect()
    
        print("Loading the checkpoint in a Llama model.")
        model = LlamaForCausalLM.from_pretrained(tmp_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
        # Avoid saving this as part of the config.
        del model.config._name_or_path
    
        print("Saving in the Transformers format.")
        model.save_pretrained(model_path)
        shutil.rmtree(tmp_model_path)
    
    
    def write_tokenizer(tokenizer_path, input_tokenizer_path):
        # Initialize the tokenizer based on the `spm` model
        # Always use the slow LlamaTokenizer (see the note near the imports above).
        tokenizer_class = LlamaTokenizer
        print(f"Saving a {tokenizer_class.__name__} to {tokenizer_path}.")
        tokenizer = tokenizer_class(input_tokenizer_path)
        tokenizer.save_pretrained(tokenizer_path)
    
    
    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument(
            "--input_dir",
            help="Location of LLaMA weights, which contains tokenizer.model and model folders",
        )
        parser.add_argument(
            "--model_size",
            # choices=["7B", "13B", "30B", "65B", "tokenizer_only"],
            choices= ["7B"]
        )
        parser.add_argument(
            "--output_dir",
            help="Location to write HF model and tokenizer",
        )
        args = parser.parse_args()
        if args.model_size != "tokenizer_only":
            write_model(
                model_path=args.output_dir,
                input_base_path=os.path.join(args.input_dir, args.model_size),
                model_size=args.model_size,
            )
        spm_path = os.path.join(args.input_dir, "tokenizer.model")
        write_tokenizer(args.output_dir, spm_path)
    
    
    if __name__ == "__main__":
        main()

     

    Run the command below. This example converts the 7B model, but larger models (13B, etc.) can be converted the same way; note that in the script above the 13B/30B/65B entries (in NUM_SHARDS, INTERMEDIATE_SIZE_MAP, and the --model_size choices) are commented out, so re-enable them first if you need a larger model.

    python convert_llama_weights_to_hf.py \
        --input_dir /data/LLaMA/ckpt_7B \
        --model_size 7B \
        --output_dir /data/LLaMA/HF

    At this point, /data/LLaMA/ckpt_7B must contain the files the conversion script expects.
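    Based on the paths the script reads, the layout looks roughly like this (only the files the script actually uses are shown; checksum files and other extras from the original download are omitted):

    /data/LLaMA/ckpt_7B
    ├── tokenizer.model
    └── 7B
        ├── consolidated.00.pth
        └── params.json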

    Running the command above converts the checkpoint to the Hugging Face format, with output like the following:

    Fetching all parameters from the checkpoint at /data/LLaMA/ckpt_7B/7B.
    Loading the checkpoint in a Llama model.
    Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:05<00:00,  5.86it/s]
    Saving in the Transformers format.
    Saving a LlamaTokenizer to /data/LLaMA/HF/.
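    To quickly check that the conversion worked, the converted checkpoint can be loaded the same way the script's sample-usage docstring suggests, just with the output path used in this post (a minimal sketch; loading the full model requires enough RAM to hold it):

    from transformers import LlamaForCausalLM, LlamaTokenizer

    # Load the converted Hugging Face checkpoint from the output path above.
    tokenizer = LlamaTokenizer.from_pretrained("/data/LLaMA/HF")
    model = LlamaForCausalLM.from_pretrained("/data/LLaMA/HF")
    print(model.config)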

     

    3) Vicuna weights

    python3 -m fastchat.model.apply_delta \
        --base-model-path /data/LLaMA/HF \
        --target-model-path /data/LLaMA/vicuna-7b \
        --delta-path lmsys/vicuna-7b-delta-v1.1

    Running the command above applies the delta and produces the Vicuna weights, with output like the following:

    Loading the delta weights from lmsys/vicuna-7b-delta-v1.1
    Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00,  1.89s/it]
    Loading the base model from /data/LLaMA/HF
    Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00,  1.79s/it]
    Applying the delta
    Applying delta: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 323/323 [00:01<00:00, 179.63it/s]
    Saving the target model to /data/LLaMA/vicuna-7b

     

     

    3. Running Vicuna

    python3 -m fastchat.serve.cli --model-path /data/LLaMA/vicuna-7b
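    Depending on your FastChat version, the CLI also accepts options for running on limited hardware; the flags below are taken from the FastChat README and may differ between releases, so check python3 -m fastchat.serve.cli --help first:

    # 8-bit quantization to reduce GPU memory use
    python3 -m fastchat.serve.cli --model-path /data/LLaMA/vicuna-7b --load-8bit
    # CPU-only inference (much slower)
    python3 -m fastchat.serve.cli --model-path /data/LLaMA/vicuna-7b --device cpu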

    When the USER: prompt appears, type whatever question you like.

    "Give me a 3day travel plan for Seoul" 이라는 질문을 하자 아래와 같이 답을 합니다.

     
