Installation Error

Nov 27, 2015 at 7:43 AM
Hello,
I am trying to install CNTK on a GPU machine, but I got errors when I ran "make all".
The errors are:
g++ -c Math/Math/CPUMatrix.cpp -o /home/tiangao/CNTK/cntk/build/.build/Math/Math/CPUMatrix.o -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -DKALDI_DOUBLEPRECISION=0 -DHAVE_POSIX_MEMALIGN -DHAVE_EXECINFO_H=1 -DHAVE_CXXABI_H -DHAVE_ATLAS -DHAVE_OPENFST_GE_10400 -msse3 -std=c++0x -std=c++11 -fopenmp -fpermissive -fPIC -Werror -fcheck-new -Wno-error=literal-suffix -O4 -ICommon/Include -IMath/Math -IMachineLearning/CNTK -IMachineLearning/CNTKActionsLib -IMachineLearning/CNTKComputationNetworkLib -IMachineLearning/CNTKSGDLib -IMachineLearning/CNTKSequenceTrainingLib -IBrainScript -I/usr/./include/nvidia/gdk -I/usr/local/cuda-7.0/include/thrust/system/cuda/detail/cub -I/usr/local/cuda-7.0/include -I/home/tiangao/kaldi-trunk/src -I/home/tiangao/kaldi-trunk/tools/ATLAS/include -I/home/tiangao/kaldi-trunk/tools/openfst/include -MD -MP -MF /home/tiangao/CNTK/cntk/build/.build/Math/Math/CPUMatrix.d
Math/Math/CPUMatrix.cpp:47:52: fatal error: acml.h: No such file or directory
#include <acml.h> // requires ACML 5.3.1 and above
                                                ^
compilation terminated.
make[1]: *** [/home/tiangao/CNTK/cntk/build/.build/Math/Math/CPUMatrix.o] Error 1
make[1]: Leaving directory `/home/tiangao/CNTK/cntk'
make: *** [all] Error 2

In the configure file, I set the ACML path like this:
have_acml=yes
acml_path=/opt/acml5.3.1/ifort64_mp
acml_check=acml5.3.1/ifort64_mp/include/acml.h

I do not know how I can solve this problem.
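
For reference, the failing g++ command above does not contain any -I flag for the ACML include directory, so here is a quick check I can run (paths taken from my configure settings above; the name of the generated settings file is my assumption):

# Confirm the header really exists where configure was told to look
ls -l /opt/acml5.3.1/ifort64_mp/include/acml.h
# The g++ line above has no -I/opt/acml5.3.1/... entry, so check whether the
# generated build settings picked up acml_path at all (file name assumed)
grep -n acml Config.make
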
Thank you very much!
Nov 27, 2015 at 2:36 PM
Hello, I worked around the problem by copying acml.h into /src/local. However, after I ran "make all" again, I got the following errors:

/usr/local/cuda-7.0/bin/nvcc -c Math/Math/GPUMatrix.cu -o /home/tiangao/CNTK/cntk/build/.build/Math/Math/GPUMatrix.o -std=c++11 -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -m 64 -O3 -use_fast_math -lineinfo -gencode arch=compute_20,code=\"sm_20,compute_20\" -gencode arch=compute_30,code=\"sm_30,compute_30\" -gencode arch=compute_35,code=\"sm_35,compute_35\" -gencode arch=compute_50,code=\"sm_50,compute_50\" -I Common/Include -I Math/Math -I MachineLearning/CNTK -I MachineLearning/CNTKActionsLib -I MachineLearning/CNTKComputationNetworkLib -I MachineLearning/CNTKSGDLib -I MachineLearning/CNTKSequenceTrainingLib -I BrainScript -I /usr/./include/nvidia/gdk -I /usr/local/cub -I /usr/local/cuda-7.0/include -I /include -I /home/tiangao/kaldi-trunk/src -I /home/tiangao/kaldi-trunk/tools/ATLAS/include -I /home/tiangao/kaldi-trunk/tools/openfst/include -Xcompiler "-fPIC -Werror"
Math/Math/GPUMatrix.cu(3096): error: no instance of function template "cub::DeviceRadixSort::SortPairsDescending" matches the argument list
argument types are: (std::nullptr_t, size_t, float *, float *, uint64_t *, uint64_t *, int32_t, int, unsigned long, cudaStream_t)
detected during instantiation of "void Microsoft::MSR::CNTK::GPUMatrix<ElemType>::VectorMax(Microsoft::MSR::CNTK::GPUMatrix<ElemType> &, Microsoft::MSR::CNTK::GPUMatrix<ElemType> &, __nv_bool, int) const [with ElemType=float]"
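
From what I can tell, this kind of "no instance ... matches the argument list" error usually means the CUB headers being picked up are older than the build expects: the call above passes separate in/out key and value pointers, and I believe only newer CUB releases provide that overload (older copies only take cub::DoubleBuffer arguments). A quick way to see which CUB copy the -I flags above resolve to, and which version it is (standard CUB layout assumed):

# Show which cub/cub.cuh the include paths above would find
ls /usr/local/cub/cub/cub.cuh
# CUB records its release in cub/version.cuh as CUB_VERSION
grep -rn "define CUB_VERSION" /usr/local/cub /usr/local/cuda-7.0/include/thrust/system/cuda/detail/cub 2>/dev/null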

Thanks a lot!
Nov 27, 2015 at 3:25 PM
Edited Dec 1, 2015 at 11:33 AM
Hello, I have another question. When I use the option "readMethod=blockRandomize" to train a DNN regression model, cn.exe stops working. It works with the option "readMethod=rollingWindow", and it also works with "readMethod=blockRandomize" when training a DNN classification model. So what is wrong with the CNTK version on my computer?

Thank you.
Dec 9, 2015 at 11:41 PM
Hi xiaoqing,

Regarding your original question (the first two posts), have you run the ./configure script from an appropriate directory to configure your build? What does it report? Please also note that there are updated Linux build instructions here: http://cntk.codeplex.com/documentation
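
In case it helps, the usual sequence from the top of the CNTK source tree looks roughly like this (the exact options and prerequisites are described in the linked instructions):

# run from the root of the CNTK checkout; configure reports the ACML/MKL,
# CUDA, GDK, CUB and Kaldi paths it found and writes them into the build settings
./configure
make -j 4 all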

Thanks, Mark
Dec 12, 2015 at 3:27 PM
Edited Dec 13, 2015 at 2:23 PM
Hello, thank you for your reply. I have now successfully installed the updated version of CNTK, and it works.