[Bug] 910B multi-card inference is very slow. #2534

Open · 3 tasks done
the-nine-nation opened this issue Sep 29, 2024 · 1 comment
the-nine-nation commented Sep 29, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

When I run inference with two 910B cards, throughput is about 30% lower than with a single 910B card.
I am using torch_npu; the test model is Qwen2.5 7B Instruct.
For a single long response, one card reaches about 32 token/s, while two cards only reach about 22 token/s.

Reproduction

The command I use to start the server: lmdeploy serve api_server --backend pytorch --device ascend /home/ma-user/work/qwen2-7b --server-port 6007 --tp 2 --cache-max-entry-count=0.9
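For comparison, the single-card baseline that gives ~32 token/s is presumably the same command with --tp 1 (an assumption; the exact single-card command is not shown in the report): lmdeploy serve api_server --backend pytorch --device ascend /home/ma-user/work/qwen2-7b --server-port 6007 --tp 1 --cache-max-entry-count=0.9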

Environment

Warning : ASCEND_HOME_PATH environment variable is not set.
/home/ma-user/anaconda3/envs/py39/lib/python3.9/site-packages/torch_npu/utils/path_manager.py:82: UserWarning: Warning: The /home/ma-user/work owner does not match the current user.
  warnings.warn(f"Warning: The {path} owner does not match the current user.")
[W compiler_depend.ts:623] Warning: expandable_segments currently defaults to false. You can enable this feature by `export PYTORCH_NPU_ALLOC_CONF = expandable_segments:True`. (function operator())
[W compiler_depend.ts:631] Warning: expandable_segments feature is not supportted                     and the possible cause is that driver and firmware packages do not match. (function operator())
sys.platform: linux
Python: 3.9.20 | packaged by conda-forge | (main, Sep 22 2024, 14:02:18) [GCC 13.3.0]
CUDA available: False
MUSA available: False
numpy_random_seed: 2147483648
GCC: gcc (GCC) 7.3.0
PyTorch: 2.1.0
PyTorch compiling details: PyTorch built with:
  - GCC 10.2
  - C++ Version: 201703
  - Intel(R) MKL-DNN v3.1.1 (Git Hash 64f6bcbcbab628e96f33a62c3e975f8535a7bde4)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: NO AVX
  - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

TorchVision: 0.16.0
LMDeploy: 0.6.0+
transformers: 4.44.2
gradio: 4.44.0
fastapi: 0.115.0
pydantic: 2.9.2
triton: Not Found

Error traceback

No response

@jinminxi104
Collaborator

We think this issue can be solved by supporting MatmulAllReduce op.
We'll support MatmulAllReduce in Oct.
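For context, an illustrative sketch only (not lmdeploy's actual code): with --tp 2, each row-parallel matmul produces only a partial result on every rank, and an all_reduce is needed to sum the partials before the next layer can run. The unfused pattern looks roughly like the torch snippet below; a fused MatmulAllReduce op would merge the matmul and the reduction and cut the per-layer synchronization cost that this comment points to.

import torch
import torch.distributed as dist

# Hypothetical illustration of a row-parallel linear layer under tensor
# parallelism; assumes a process group is already initialized (one rank per NPU).
def row_parallel_linear(x_shard: torch.Tensor, w_shard: torch.Tensor) -> torch.Tensor:
    # Each rank holds a slice of the weights, so matmul yields a partial result.
    partial = torch.matmul(x_shard, w_shard)
    # The partials must be summed across ranks before the next layer.
    # This extra per-layer synchronization is what a fused MatmulAllReduce
    # kernel would fold into the matmul itself.
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)
    return partial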

@jinminxi104 jinminxi104 self-assigned this Sep 30, 2024