Conversation
Thanks for the PR. This is certainly impressive work! As a first step, I'd ask you to remove any merge commits from your git history, i.e. make it linear. Furthermore, I'd suggest you squash the git history to a single (or few) commit(s).
Force-pushed from 0849dc8 to c06d0f9
oschuett left a comment
Thanks for fixing the git history. I left a couple of comments. Once everything is up and running, it would also be good to add some regression tests.
```sh
  echo "libtensorflow_cc-${tfcc_ver}.tar.bz2 is found"
else
  download_pkg ${DOWNLOADER_FLAGS} ${deepmd_sha256} \
    https://anaconda.org/deepmodeling/libtensorflow_cc/2.3.0/download/linux-64/libtensorflow_cc-${tfcc_ver}.tar.bz2
```
Installing Tensorflow via Conda is a nice solution given how painful it is to build from source. However, I'm wondering why you are not using the "official" package from conda-forge?
> Installing Tensorflow via Conda is a nice solution given how painful it is to build from source. However, I'm wondering why you are not using the "official" package from conda-forge?
The packages used here are provided by Deep Modeling, the main developer group of DeePMD-kit. I will also try the package on conda-forge.
> The packages used here are provided by Deep Modeling, the main developer group of DeePMD-kit. I will also try the package on conda-forge.
It failed because include/google/protobuf/type.pb.h is missing. The package on conda-forge does not seem to include some headers needed by DeePMD-kit.
That header file is probably part of libprotobuf. So, I guess installing Conda packages by hand is not a very robust solution after all.
I think, then we should keep Tensorflow out of the toolchain for now. It's actually not that hard to install via pip and most HPC centers also provide it.
I've opened #1622 to make Tensorflow available in the CI. Once it's merged you can rebase this PR and continue testing.
```sh
    ;;
esac

cub_link="https://github.com/NVlabs/cub/archive/c3cceac115c072fb63df1836ff46d8c60d9eb304.zip"
```
I believe CUB is now included in the regular CUDA Toolkit. So, there is no need to install it separately anymore.
Well, in the DeePMD-kit repository they use that very fork of CUB as a submodule, and removing it raises an error when compiling. So here I tried to add it to the DeePMD-kit code by downloading the code archive directly. According to your description it might be OK to build without it, so I will give it a try.
> I believe CUB is now included in the regular CUDA Toolkit. So, there is no need to install it separately anymore.
In their latest commits they removed the extra CUB requirement for CUDA >= 11, shown here, as it is now included in the CUDA Toolkit. For older CUDA versions, however, it might still be needed.
Ok, then let's keep it for a year or so until everybody has upgraded to CUDA >= 11.
```fortran
!> \param nonbonded ...
!> \param section ...
!> \param start ...
!> \author teo
```
You can put your own name here or remove the \author tag as you prefer.
```fortran
dcoord => dpmd_coord
dbox => dpmd_box
datype => dpmd_atype
! IF (do_parallel) THEN
```
Leftovers copied from QUIP? See also the remainder of this routine.
```make
#
FCDEBFLAGS = ${FCDEBFLAGS}
CFLAGS = ${CFLAGS}
CXXFLAGS = ${CXXFLAGS}
```
Does DeePMD-kit also offer a C API or would it be easy to add? So far, we've managed to keep C++ out of CP2K and I'd really like to keep it that way.
```c
 * \brief A minimal wrapper for DeepMD-kit C++ interface.
 * \author Yongbin Zhuang and Yunpei Liu
 ******************************************************************************/
```
The include guard (#ifndef DEEPMOD_C_WRAPPER_H ...) is missing.
```fortran
END DO
c_model(N + 1) = C_NULL_CHAR
#if(__DEEPMD)
create_nnp%ptr = create_nnp_c(c_model)
```
You should be able to simply pass TRIM(model) // C_NULL_CHAR directly without having to copy; see for example src/pw/pw_fpga.F.
```fortran
dener, dforce, dvirial, datom_ener, &
datom_virial, dcoord, datype, dbox) BIND(C, name="compute_nnp")
USE ISO_C_BINDING
TYPE(C_PTR), INTENT(IN), VALUE :: nnp, vecsize, dener, dforce, dvirial, &
```
Can you declare the proper types rather than simply declare everything as a C_PTR?
Also, I somehow doubt that everything is INTENT(IN).
```fortran
#if(__DEEPMD)
CALL compute_nnp_c(pot, C_LOC(vecsize), C_LOC(dener), C_LOC(dforce(1)), &
                   C_LOC(dvirial(1)), C_LOC(datom_ener(1)), C_LOC(datom_virial(1)), &
                   C_LOC(dcoord(1)), C_LOC(datype(1)), C_LOC(dbox(1)))
```
Wouldn't C_LOC work on the arrays directly rather than taking the first element?
Also, it should be possible to drop POINTER, but they should likely be declared CONTIGUOUS instead.
```fortran
TYPE(deepmd_data_type), POINTER :: deepmd_data
TYPE(deepmd_pot_type), POINTER :: deepmd
TYPE(nnp), SAVE :: deepmd_tmp_pot1, &
                   deepmd_tmp_pot2
```
No. If you have to preserve data you have to add it to a suitable env. No global vars or shared state.
```fortran
PRIVATE
LOGICAL, SAVE, PRIVATE :: do_create(2) = .TRUE.
INTEGER, SAVE, PRIVATE :: do_count = 0
```
No, see further down below.
Thanks for reviewing. We will check our code and modify it accordingly.
Tensorflow is now readily available within the CI containers (#1622). So, please rebase your branch and adapt.
Gentle ping, is this PR still being worked on?
@deepmodeling, does anybody know if there is still any interest in this PR?
I'm going to close this PR now. Personally, I think it's really a pity because obviously a lot of work went into it. But I'm afraid this wouldn't be useful to anybody without further support from the authors.
Hi, @oschuett,
@oschuett is it OK to open a new subsection named DPNN/DPMD under FORCE_EVAL, or to nest the section in FORCE_EVAL/NNP?
Hi @robinzyb, this is great news!
Yes, please create new subsections below FORCE_EVAL, MOTION/MD, and MOTION/FREE_ENERGY as you see fit.
Deep Potential (DP) is a machine learning potential based on deep neural networks, first proposed by Zhang L. et al. (DOI: 10.1016/j.cpc.2018.03.016). By learning from the coordinates as well as the energies and forces of different structures from DFT calculations, a potential describing the whole system can be frozen into a file. By reading this file, a program can yield energies, forces, etc. for given structures, so MD simulations with DFT accuracy can be sped up greatly. This PR adds Deep Potential support to CP2K: with a potential file (e.g. `graph.pb`) supplied, CP2K can run MD simulations. Only DeePMD-kit >= 2.0.0 is supported.

To enable the DeePMD-kit feature, neither the `--with-deepmd` nor the `--with-tfcc` toolchain flag should be set to `no`. The following installation methods have been added:

- `install`: the toolchain downloads and installs the packages itself. Set `--deepmd-mode` to `gpu` if GPU support is needed, and load the CUDA environment before installing. For now, DeePMD-kit v2.0.0.b1 and the Tensorflow C++ interface 2.3.0 will be installed, and only CUDA Toolkit 10.1 works this way.
- pre-installed `$deepmd_root` and `$tensorflow_root` (described here in detail), by setting `--with-deepmd=$deepmd_root` and `--with-tfcc=$tensorflow_root`.
- system-wide installations that the `ld` command can read directly.

The results from CP2K have been tested to be the same as those from LAMMPS, supported by Deep Modeling.